> KDE Linux is an immutable distribution that uses Arch Linux packages as its base, but Graham notes that it is "definitely not an 'Arch-based distro!'" Pacman is not included, and Arch is used only for the base operating system.
So it's basically a SteamOS sibling, just without Steam?
Sounds like a good distro to use with your parents and grandparents, if they're not solely using iPads...
That might be their target audience.
What appeals to me about Linux is the hackability and configurability. This takes it all away in some way, but that's not to say that they won't find a market for it.
Seems targeted at office workplaces. A locked-down system that cannot even be corrupted or tampered with. Consider a workplace of a receptionist at a medical office, or a library computer.
Linux is wonderfully flexible, which allows creating distros like that, among other things. Linux is also free as in freedom, which may be very important for trusting the code you run, or the code a government official runs.
I bet that past the alpha stage they will offer a configuration tool to prepare the images to your liking, and ways to lock the system down even more. Would work nicely over PXE boot.
More and more of this software is moving to the cloud and only requires a web browser.
A distribution that is very difficult to break and can launch a web browser would already meet many use cases for receptionists, hotels, consultation stations, etc.
If only a standard existed to do this... Hint: it has existed for ages in Italy and has recently been extended to Europe (see Registered Electronic Mail: RFC 6109 and ETSI EN 319 532-4)
Also the limitations of fax sort of end up being its differentiator from email and its biggest advantage. Not needing an email server is a big boon, not really being susceptible to phishing is a boon, and with modern fax over internet it's virtually indistinguishable in user experience from email.
I remember fax phishing even before I had ever heard of email. At many large companies, simply paying a sub-$100 invoice was standard procedure, without even checking with other internal departments.
The United States is not the only country in the world. In France, it is almost impossible to make an appointment without using Doctolib, which is SaaS software for booking consultations (and lots of other things).
Doctolib is not the problem at all. The real problem is the lack of government proactivity on these initiatives.
If the government had already thought about this in advance (even in 2013, when Doctolib was just starting out), then there could be very strong protections for data which would then allay all of these concerns, and we might have had multiple players in this space.
The best use of Doctolib for me is that I can make appointments without having to speak perfect German on the phone. I can make appointments in the evening when I'm back from the office and can relax a little bit. So Doctolib is a godsend for me as an immigrant here, and I'm guessing for a lot of people too. I can look up doctors who are available without having to bother the receptionist. This is a much more efficient way of doing things.
Doctolib is a B2B model. Patients are not the customers; medical practices are the customers. Doctolib saves on the cost of a medical secretary, which is why it is so popular.
What's more, this is a sensitive and regulated field, where trust is essential. They can't afford to mess around if they don't want to quickly find themselves subject to more restrictive regulations.
They were heavily criticised in France because they allowed charlatans and people with no medical training to register (particularly for Botox injections).
As soon as this became known, they quickly rectified the situation.
> It will inevitably be enshittified.
That only happens with the Western venture-capital model in private companies. Doctolib's makers already have income from all these government contracts instead of just relying on adverts and hype.
Not just in the US, they're surprisingly popular still here in Switzerland. I've written interfaces to fax gateways (convert incoming fax to PDF, extract metadata, save in DB) multiple times.
Because Chrome OS is offered on low-cost laptops that are unsuitable for office work.
What's more, it's Google, so we're not safe from a "Lol, we're discontinuing support for Chrome OS. Good luck, Byeeee."
Some offices still have bad memories of Google Cloud Print, for example. I'm not saying that being an early adopter of a distribution that's less than a year old is a good solution. Just that Google's business products don't have a very good reputation.
> Because Chrome OS is offered on low-cost laptops that are unsuitable for office work.
ChromeOS Flex exists, it is free of charge, and it runs on more or less any x86-64 computer, including Intel Macs.
Nordic Choice got hit with ransomware and, rather than paying, just reformatted most of its client PCs with ChromeOS Flex and kept going with cloud services.
Being #2 with tens of millions of users is OK, you know. It doesn't mean you've failed.
Sure it's less popular. It came in under 20 years ago, competing against an entrenched superpower that was already nearly 30 years old back then. It's done pretty well.
The Google Apps for Business bundle has outsold by far every single FOSS email/groupware stack in existence, and every other commercial rival as well.
Notes is all but dead. Groupwise is dead. OpenXChange is as good as dead. HP killed OpenMail.
My medical devices run Windows due to specialised software. But on my medical office PC I use Linux: EMR and receipts through a web app in the browser (locally hosted, but it can be cloud), LibreOffice, Weasis DICOM viewer, etc.
My non-software-engineer friends have better things to do than learn Wine, and yet they use it every day when playing games on their Steam Deck, unaware of its existence.
And that supplier could decide to bundle their box with such a distro, if this can save them money either due to licensing or better stability (= less support).
It is possible for somebody to make this into a workable bundle targeting specific professions/environments. A doctor would not care whether double-clicking an icon opens an app through Wine or not.
Jira on-prem and cloud works just fine on Linux. My experience is support tickets usually go through there. And then calls and stuff are on zoom or maybe teams - both also work on Linux.
That seems like a good niche to exist indeed and many people would probably misunderstand its purpose by it being called a “KDE distribution”. It would perhaps have been better if it were created by some independent group for this purpose and just happened to settle upon KDE as its interface, or rather offer multiple choices to be honest.
No, KDE does not need its own distro, that's the issue. They don't need their own distribution method; it benefits no one.
The idea of a distribution for this specific purpose is best left in the hands of some organization with experience with this specific purpose, not KDE whose experience is developing desktop environments.
How exactly is it “awkward” for them and how exactly does distributing this in any way improve the development process of KDE? They can't even dogfood it obviously.
Plasma[1] is a desktop environment made by KDE, who also makes lots of other software. They make stuff like Dolphin (file manager), Konsole (terminal emulator), and Partition Manager as OS basics already[2].
It doesn't necessarily take much hackability away. You might find it makes it easier.
You can overlay changes to the read-only rootfs using the sysext mechanism. You can load and unload these extensions. This makes experiments or juggling debug stuff a lot easier than mucking about in /usr used to be.
A lot of KDE Linux is about making updates and even hackability safe in terms of making things trivial to roll back or remove. A goal is to always be able to unwedge without requiring a reinstall.
If you know you can overlay whatever over your /usr and always easily return to a known-good state, hackability arguably increases by lowering the risk.
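For anyone curious, the workflow is roughly this; the extension name and binary here are made up, and the details are in systemd-sysext(8):

```sh
# Stage an extension as a plain directory under /run/extensions:
mkdir -p mytools/usr/bin mytools/usr/lib/extension-release.d
cp ~/bin/mytool mytools/usr/bin/
# The release file must be named after the extension; ID=_any skips OS matching:
echo 'ID=_any' > mytools/usr/lib/extension-release.d/extension-release.mytools
sudo mkdir -p /run/extensions
sudo cp -r mytools /run/extensions/
sudo systemd-sysext merge    # /usr now shows the overlaid files
sudo systemd-sysext unmerge  # back to the pristine read-only /usr
```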
This overlay feature sounds attractive. It bothers me that there is no easy traceability or undoability when I perform random system-level Ubuntu configuration file edits to make things work on my system. Maybe I'm doing it wrong. Sure I could do the professional sysadmin thing and keep a log book of every configuration change, or maybe switch to NixOS and script all my configuration changes, but something with lower effort would be welcome. Ideally you want the equivalent of "git commit -m<explanation>", "git diff" and "git log" for every change you make to system configuration.
CachyOS and openSUSE have you covered with btrfs and snapper pre-configured to take snapshots before/after doing potentially damaging things (and, of course, you can make them manually, whenever the thought occurs to you that you're entering the "danger zone"). You can boot into a snapshot directly from the bootloader, then rollback if you need to.
Immutable distros just one-up that by trying to steer the system in a direction where it can work with a readonly rootfs in normal operation, and nudging you to take a snapshot before/after taking the rootfs from readonly to read-write. (openSUSE has you covered there as well, if that's your thing; it's called MicroOS).
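For reference, the snapper side of that is only a few commands (the snapshot numbers are illustrative):

```sh
sudo snapper create --description "before danger zone"  # manual snapshot
sudo snapper list                                       # see what you have
sudo snapper undochange 41..42 /etc/fstab               # revert a single file
sudo snapper rollback 41                                # full rollback; reboot into it
```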
Both of those distros use KDE by default, so the value-add of KDE having its own distribution is basically so they can have a "reference implementation" that will always have all the latest and greatest that KDE has to offer, and showcase to the rest of the Linux world, how they envision the integration should be done.
If I were to set up a library computer or a computer for my aging parents, I would choose openSUSE Leap Micro with KDE, as that would put the emphasis on stability instead.
I keep my /etc under Git. When the system does changes automatically (via an update or whatever), I make a Git commit with a special distinct message, and so I can easily filter out all my own changes.
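Roughly like this; etckeeper automates the same idea around package upgrades, if you'd rather not hand-roll it:

```sh
cd /etc
sudo git init && sudo git add -A && sudo git commit -m "baseline"
# after hand-editing something:
sudo git diff                          # what changed?
sudo git commit -am "tune sshd config"
# after an update touches /etc, commit with a distinct marker:
sudo git add -A && sudo git commit -m "AUTO: system update"
sudo git log --invert-grep --grep='^AUTO:'  # only my own changes
```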
This is a major reason I ended up with https://getaurora.dev. I layer a few things, but it comes with bells and whistles (like NVIDIA drivers, if you need that).
I can't see myself going back to a "normal" distro. I don't want to spend time cosplaying a sysadmin, I have things to do on my computer.
I think Aurora Linux[1] is more suitable for this purpose.
However, while I love the approach of having an immutable distribution, I don't see the attack vector of ransomware handled in a good way. It does not help if your OS is intact but your data is irrecoverably lost due to a wrong click in the wrong browser on your system.
I think the backup and restore landscape has enough tools to fix this (cloud + restic[2] or automated ZFS snapshots[3]), but it takes a bit of time / a script to set up something like this for your parents in your favorite distro.
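A minimal restic loop, assuming a repo on an external disk at /mnt/backup (the paths are illustrative), might look like:

```sh
export RESTIC_REPOSITORY=/mnt/backup RESTIC_PASSWORD_FILE=~/.restic-pass
restic init                           # one-time repository setup
restic backup ~/Documents ~/Pictures  # incremental, deduplicated snapshots
restic snapshots                      # list restore points
restic restore latest --target /tmp/restored
```

Wire the backup line into a cron job or systemd timer and it's basically set-and-forget.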
I am willing to try an officially supported image, but I am definitely not building my own to run a computer for my mom given that Windows 10 support is ending; I don't have the spoons nor the time for that.
But I guess it is better to have the option than not to have it.
If this is related to the split in Mesa for "Gallium" and "non-Gallium" support, you could try installing the amber branch. Older nvidia video cards are still supported that way.
However, the only distro I could find where it actually worked was Chimera. Not the gaming-related ChimeraOS but the from-scratch LLVM-compiled all-static APK and Dinit distro with a hodgepodge userland ported from the BSDs.
It's rolling release though so it'll happily install the latest bugs. But it probably does that faster than any other distro.
I mean, nothing stops you from building your image of KDE Linux (or any immutable distro) with a built-in restic config.
This is more about preventing the user from messing up their computer than it is about data safety.
I've been using Bazzite for 2 years now (an immutable distro based on Fedora Silverblue) and I just love the fact that I can "unlock" the immutability to try something that could mess up my systemd or desktop environment, and I can just reboot to erase it all away.
I also have a GitHub Action to build my custom image with the packages I want and the configuration I want.
And this makes adding a backup setup even easier; it can be baked into the distro easily with a custom image! Your grandparents don't have to do anything, it will auto-update and auto-apply (and even roll back to the n-1 build if it fails to boot)
No, the main point is they provide a reference image using mkosi, and you can clone kde-linux and trivially make spins. At some point I expect just about everyone is gonna find a spin which scratches all their itches and to which they are devoted.
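For anyone who hasn't seen mkosi: it builds images from declarative INI-style configs, and spins boil down to drop-in config tweaks. A rough sketch (the drop-in file name and packages are made up, and exact key names vary between mkosi versions):

```sh
# From a checkout of an mkosi-based image config:
mkdir -p mkosi.conf.d
cat > mkosi.conf.d/my-spin.conf <<'EOF'
[Content]
Packages=htop
         neovim
EOF
mkosi build   # rebuild the image with the extra packages included
```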
> I mean, nothing stops you from building your image of KDE Linux (or any immutable distro) with a built-in restic config.
I hear you. The problem is that basically nothing stops you from building anything yourself. The difference is that there is no easy-to-use built-in solution (like Time Machine), and ease of use is what makes the difference. Especially a TIME difference. Of course there is software SIMILAR to Time Machine, but it seems to be hard to write something rock solid and easy to use.
In fact I also have built it myself: https://github.com/sandreas/zarch
A script that installs Arch on ZFS with ZFSBootMenu and preconfigurable "profiles" for which packages and AUR packages to use. Support for the CachyOS kernel with integrated ZFS is on my list.
I have already thought about putting together a Raspberry Pi image that uses SSH to PULL backups over the network from preconfigured hosts with preconfigured root public keys and is easily configurable via a terminal UI, but I did not find the time yet :-) Maybe syncthing is just enough...
> However, while I love the approach of having an immutable distribution, I don't see the attack vector of ransomware handled in a good way
The philosophy of security in "modern" OSs is to protect the OS from the user. The user is evil and, given too many rights, will destroy the (holy) OS. And user data? What user data? /s
> Sounds like a good distro to use with your parents and grand parents, if they're not solely using iPads...
THIS!
I was pondering putting Linux on my father's ancient machine (still running Windows 7; or migrating him to something slightly newer, but Win10/Win11 rubs me the wrong way), but I was wary of "something wrong happening" (and I'm away right now).
And having an immutable base would be awesome: if something goes wrong, just revert back to the previous one and voilà, everything still works. And he would have fewer options to break something…
It makes hacking easier in some ways too. Overlay any hacks; they'll be gone on reboot unless you want otherwise. Also see blue-build.org, which helps you put all your hacks in the immutable image.
> What appeals to me about linux is the hackability and configurability.
Innovation happens on stable foundations, not through rug pulls.
Yes, you have the freedom to make your system unbootable. When Debian first tried to introduce systemd, I replaced PID 1 with runit, wrote my own init scripts & service definitions, and it ran like this quite well, until... the next stable release smashed me in the face.
It's absurd how hackable the Linux distros are. It's also absurd to do this to your workhorse setup.
I don’t mean this as a gotcha, but have you tried an immutable/atomic Linux distro?
Immutable/Atomic Linux doesn’t take away any ability to hack and configure it. It’s just a different approach to package and update management.
There really isn’t anything you can do on other Linux distros that you can’t do with it.
I’m using Bazzite which is basically in the Fedora Atomic family and all it really changes is that if I want to rpm install something and there’s no flatpak or AppImage then I just need to decide on my preferred alternate method to install it.
In the very worst case I’m using rpm-ostree and installing the software “traditionally” and layering it in with the base OS image.
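For the curious, that layering path is just a couple of commands (the package name is only an example):

```sh
rpm-ostree install wireshark  # layer a package onto the base image (applies on reboot)
rpm-ostree status             # show deployments and layered packages
rpm-ostree rollback           # boot the previous deployment if something breaks
```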
Now you might be thinking, what’s the benefit of going through all this? Well, I get extremely fast and reliable system updates that can be rolled back, and my system’s personalization and application environment is highly contained to my home directory.
I’m not an expert but I have to think that there are security benefits to being forced into application sandboxing as well. Applications can’t just arbitrarily access data from each other. This isn’t explicitly a feature of immutable/atomic Linux but being forced into installation methods that are not rpm is.
The difference is there's a lot more HN-like users who will go out and run Arch than ageing people who will go out and install Linux instead of getting an iPad/tablet.
If a distribution is immutable (and thus omits the package manager) and pre-configured for a specific purpose (here, ensuring that KDE works), how much does the base really matter?
It does, I believe. I've never tried it myself, but I've heard multiple voices say that once you go into the terminal, the entire Gentoo stack is just there, with portage, equery, qapps and such.
In fact, from what I understand it is not really Gentoo-based but Portage-based: they mostly write their own ebuilds and software, and from what I know they have their own custom init system and display system that aren't in Gentoo, but they found that Portage was simply very convenient for automating their entire process. The claim that “Gentoo is just Portage” is not entirely true; there's still a supported base system configured as offered by Gentoo, but it's far more flexible than that of most systems, of course, granting the user choice over all sorts of fundamental system components.
How's Flatpak doing in terms of health of the tech and the project maintenance?
Merely 4 months ago things didn't look too bright... [1]
> work on the Flatpak project itself had stagnated, and that there were too few developers able to review and merge code beyond basic maintenance.
> "you will notice that it's not being actively developed anymore". There are people who maintain the code base and fix security issues, for example, but "bigger changes are not really happening anymore".
Flatpak and Snap always seem to be in the "just give us 6 months and we'll have everything fixed" phase. It's been the same for 7 or 8 years at this point.
On a desktop, I nowadays actually somewhat prioritize flatpaks. I can get recent versions, sandboxing and the configs and data are always in standard locations with predictable naming. They can be installed for user in home dir without root and are easy to move over in case of OS reinstalls.
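The per-user mode looks roughly like this (Flathub and the Firefox app ID as the usual examples):

```sh
flatpak --user remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak --user install flathub org.mozilla.firefox
flatpak --user list   # everything lands under ~/.local/share/flatpak, no root needed
```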
Flatpak works pretty well. I try to prioritize my distribution's repositories but some software is not packaged. I've taken the easy way out and installed the flatpak. I guess I could go and package them, but I've been too lazy so far.
I think the fact that you try to prioritise the distro's repos shows that it probably isn't quite ready. Presumably that's because you know that they'll work reliably, but you aren't so sure about Flatpaks.
I can't speak for GP but the number one reason I prefer my distro's repos over flatpaks has nothing to do with Flatpak as a technology.
Most distros have a fantastic track record of defending the interests of their users. Meanwhile, individual app developers in aggregate have a pretty bad one; frequently screwing over their users for marginal gain/convenience. I don't want to spend a bunch of time and energy investigating the moral character of every developer of every piece of software I want to run, but I trust that my distro will have done an OK job of patching out blatantly user-hostile anti-features.
For Flatpak, I use vscodium to strip Microsoft telemetry out of vscode.
It works really well; the one downside is that vscode extensions are pretty intrusive. They expect system-provided SDKs, so you then have to install the SDKs in the Flatpak container so you have them. If vscode extensions were reasonable and somewhat sandboxed, that wouldn't be a concern.
All that is to say, Flatpak works well for this purpose too.
I recently installed Debian 13 and went with the default partition sizes for /, /var, swap, etc. I had two flatpaks installed and my entire /var partition was filled up with 10 GB of flatpak data. Frankly, very bad default partition sizes, and I should not have been so trusting, but flatpak is an unbearably hot mess.
Flatpak installs and shares runtimes. That's what makes it so stable, regardless of your distro.
So yes, if you install 1 KDE app from Flatpak, you will have the KDE runtime. But that is true if you install 1 KDE app while on Busybox as well. It's the subsequent KDE apps that will reuse the dependencies.
Which is often not the case. For those of us with slow internet connections, Flatpaks take hours to download programs that would otherwise take seconds.
That's the entire draw of Flatpak - I can have applications with out of sync libraries and they just work. That's a big big headache with system provided packages.
Absolutely. I should have verified partition sizing, and I should never have allowed even one flatpak. That doesn't make Debian default sizes and installation process anywhere close to good.
Because it isn't used for much? It's mostly just logs these days. Most data on most systems goes in /usr or /home. I would say the weird thing here is that Flatpak puts runtimes in /var by default instead of ~/.cache or something like that.
User-mode Flatpaks keep things in ~/.local/share/flatpak. This person simply installed a Flatpak in system-mode, which puts it somewhere other users could also run it (i.e. not your home directory).
Ask the Debian maintainers. That was their recommendation, and I trusted them - presuming they would recommend something that would work more than two weeks on a rather standard laptop installation. I will have to re-partition within the next year, because their / partition is too small as well.
I think this happens because the default option is “recommended for new users”. So some not-new users believe that the other options are better for them.
That default options reads like this:
“All files in one partition (recommended for new users)”
No, they make more than one recommendation - including which partitions to make and the sizes for each of them should you opt into their separate partition path in the installer. So they have defaults for multiple partitions and partition sizes - and I trusted them to have thought them through.
Two improvements that could be made: 1) Easy: put a brief note in the installer indicating what might fill up the partitions quickly, so people can have a heads-up, do a little research, and make a better decision. 2) Moderate: still keep the note, but also check the disk size and maybe ask which type of workload (server, development, home user), then propose something a bit more tailored.
I don't think that's the best conclusion: these days, disk is cheaper than it has ever been, and that "foundational" 8 GB will serve all the Flatpaks you want. Installing apps from packages sprays the same dependency shit all over your system; Flatpak was nice enough to contain it, so you immediately noticed it.
It's at best a mixed bag which makes it harder to fine-tune apps on your system and with limited security benefits (which again become harder to improve yourself).
When installing just two apps, even if both are in the same (KDE or GNOME) realm, you can very easily end up with 8 flatpaks (including runtimes) or more. This is due to the variety of runtimes and their versions: one per KDE or GNOME Platform release (about two a year), plus a yearly freedesktop base, and not all apps being updated to the latest runtimes constantly.
You then have to add at least 6 locale flatpaks to these hypothetical 8 flatpaks.
Especially with Debian, locales matter: if you don't do a `sudo dpkg-reconfigure locales` and pick what you need before installing flatpaks on a default install, you will get everything and thus gigabytes of translations you don't even understand, wasting your disk space.
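Flatpak also has its own per-installation language setting that limits which locale extensions get pulled, plus a cleanup command for orphaned runtimes; roughly:

```sh
sudo flatpak config --set languages "en;de"  # only pull these translations
flatpak uninstall --unused                   # drop runtimes nothing references anymore
```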
That was what was insane to me. I expected a couple hundred MB each for my first couple of apps. Not a pleasure in itself, but I was blindsided by the 10 GB. The apps were clearly also part of the problem; they should not have so many dependencies. However, even after I removed them, flatpak was using 8 GB+, and I had to purge it to reclaim the space. That is why I called it a hot mess.
Yeah my one experience with installing things through flatpak is that it breaks them when it updates itself, upon which they can't launch until they're updated as well. And then for some reason errors out when trying to update them. Sigh.
Personally I'm interested in distros with an immutable base system. After decades of a lot of tinkering with all sorts of distros, I value a stable core more than anything else. If I want to tinker and/or install/compile packages I can do so in my $HOME folder.
In fact, this is what I've been doing in other distros, like Debian stable, nevertheless I have no real control of the few updates to the base system with side effects.
This is not the first immutable distro, but it comes from the people who develop my favourite desktop environment, so I'm tempted to give it a try. Especially as it looks more approachable than something like NixOS.
Aren't "immutable" distributions really just glorified "live CD's"? Not really seeing the point of them, tbh. It means that users will have to build a custom system image or fiddle with FS overlays just to do system management tasks that are straightforward on all other systems. The single interesting promise of "seamless" A/B updates is a vacuous one, that you could address more effectively on traditional systems by saving a temporary FS snapshot; this would even deduplicate shared data among the two installs, which is very hard to do with these self-contained images.
Lots of Linux users hate it, but as a one-time Linux user (about a decade as my main desktop OS) who now does 100% of important computer use on macOS or iOS, I find the division of “stable macOS base all the way through a working and feature-complete GUI desktop; homebrew (and the App Store) for user software; docker and language-specific package/env managers for development dependencies” to be basically perfect. Trying to use linuces where the base system and user packages are all managed together feels insane, now.
It depends a lot on the distro and how volatile it is and what tools are available.
I run Debian stable, and it's not immutable, but it is very unchanging. I don't worry much about system libraries and tooling.
The downside to that is that then userland application are out of date - in enters Flatpak. I run most GUI applications in flatpak. This has a lot of benefits. They're containerized, so they maintain their own libraries. They can be bleeding edge but I don't have to worry about it affecting system packages. I also get much simpler control - no fiddling with apparmor, the built-in Flatpak permission system is powerful enough.
The blind spot then is CLI apps and tooling. Usually it doesn't matter too much being bound to system packages, but if it really does, I can always containerize those too. I only do it for my PHP dev environment.
>The blind spot then is CLI apps and tooling. Usually it doesn't matter too much being bound to system packages, but if it really does, I can always containerize those too. I only do it for my PHP dev environment.
Do you encounter any friction getting different containerised tools to talk together? Can you compose them in the classical unix fashion?
I put them on the same container, basically. I bundle my PHP binaries, modules, and tooling all together and then use it on container. It defeats the purpose a bit, but it keeps the system clean. No NPM packages cluttering everything, no composer packages leaking, etc. Cross container, I'm sure, is more complex.
> I run Debian stable, and it's not immutable, but it is very unchanging. I don't worry much about system libraries and tooling.
I basically did the same with Tumbleweed for a couple of years. Can't stand the point release distros. Lagging behind a year or two when it comes to the DE is not something I fancy. Never liked Tumbleweed much though. Felt very unpolished when using Plasma.
> The blind spot then is CLI apps and tooling.
I can really recommend homebrew. Works well. apt is for the system, homebrew is for user facing CLI apps. :)
This. I always dreamed of a Debian/Ubuntu distro where you could fully separate the "SYSTEM ENV" from the userland ENV, with userland referring to the system ENV, and then, if the userland ENV has a different version, userland takes precedence when it's the user running the command vs. some automated service. I do not know if there's something like this for Linux outside of maybe containers? I guess that comes sorta close.
Apps are bundled and installed like they are on macOS, and there's a very strict distinction between literal 'System', 'Users' and 'Programs' directories.
> It means that users will have to build a custom system image or fiddle with FS overlays just to do system management tasks that are straightforward on all other systems.
What system management tasks? /etc and /var are usually writeable, which is all you need to configure the software on your system. Overlays are for installing new software on the base system, which is only really necessary for something like nvidia drivers because all other software is installable through other means (it's also usually a trivial process). Even if you don't want to use containers, you can use a separate package manager like Homebrew/Nix/Guix/Pacman/etc.
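Both of those keep their own prefix outside the base image, so the read-only rootfs is never touched; for example (package names are only illustrative):

```sh
# Homebrew on Linux lives under /home/linuxbrew/.linuxbrew:
brew install ripgrep
# Nix used purely as a package manager keeps everything under /nix;
# 'nix profile' assumes the experimental nix-command/flakes features are enabled:
nix profile install nixpkgs#htop
```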
It requires a bit of a mental shift to adapt to if you're only used to traditional systems. It's kind of like the move from init scripts to systemd: it's objectively an improvement in all the ways that matter, but cultural/emotional push back is inevitable :)
I have been using Aurora DX for the last month, and it has been a good experience but has also required a shift in my thinking.
If anything is not included in the base image, you have a few options:
1. use distrobox to install it in a container, and export the app to the desktop.
2. use rpm-ostree to install it as a layer. This is on the slow side, and will slow down weekly updates.
3. Make your own base image with what you want included. This is probably cumbersome and requires some infrastructure.
I have a few things in distrobox containers, things which aren't available as flatpaks. The biggest hurdle, for me, was getting wireshark running since the flatpak version can't capture traffic. I had to make a root distrobox container and export the app to my desktop. It works, but there were definitely some hoops to jump through.
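In case it saves someone time, the rootful route looks roughly like this (the container name and image tag are made up):

```sh
distrobox create --root --name sniff --image fedora:40
distrobox enter --root sniff        # get a shell inside the container
sudo dnf install wireshark          # run inside the box
distrobox-export --app wireshark    # inside the box: adds a desktop entry on the host
```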
I like that updates come through once a week and they aren't applied until I reboot. If I have problems, it is easy to roll back to what I was running before.
I would be comfortable giving my parents an Aurora setup, knowing that they can't easily break it.
I use Bazzite, which ships with the homebrew package manager. Idk if wireshark is available on homebrew, but if it is then you'll be able to use it that way without having to deal with any issues related to containers. Nix is probably another option (you can use Nix as a package manager instead of a distro)
You could also build it from source, although that's definitely more work.
Immutable systems such as this one and Fedora's Atomics and CoreOS/Flatcar have their uses. Whether they make sense for you or for general desktop OSes is another question, but there are many situations where the approach makes a lot of sense.
Really, I don't see a lot of difference between immutable desktop OSes and Android or iOS. That model is not necessarily a bad one when you're rolling out systems that you don't expect the user to need to fiddle with the system management tasks you refer to. If I have 1,000 laptops to manage for a corporate environment, say, or for non-technical users who are not going to fiddle with drivers but might want to install Inkscape (or not).
Or you can try to install whatever custom packages you need under $HOME, without the need for any special permissions or FS overlays? But yes, saving snapshots is also a good solution.
I guess immutable distros such as this one target people who don't need much customisation and mostly just need what's already there anyway.
The advantage of immutable distro over custom OS snapshot is that everyone is booting off the same images. It makes support manageable because behaviors are much more likely to be reproducible. This is what stability is about, not just local system image.
End users should not have to do system management at that kind of low level. They should be able to focus on accomplishing what they actually want to do and not have to maintain the system themselves.
>you could address more effectively on traditional systems by saving a temporary FS snapshot
That's an implementation detail. Every modern OS uses essentially snapshots for A/B updates to avoid wasting storage space.
There's Arkane Linux, which aims to be atomic as well, and the maintainer snapshots the packages every few days after testing. It's currently mainly managing/focusing on one DE, but I could see it including KDE etc. in the future if enough volunteers join in. I haven't given it a shot yet; I quite love EndeavourOS as is.
I switched to Fedora Kinoite about two years ago and it's been a great experience. Updates are mostly invisible to me, I only layer a handful of packages (zsh, fzf, distrobox) and I do development inside of distrobox containers so I don't have weird build dependencies in my base system.
Desktop apps are all Flatpaks, including Steam.
Edit: This comment has been downvoted into the negatives? Did something change about HN culture?
I switched to https://getaurora.dev, also two years ago, and I'm not going back to a "normal" distro.
Can recommend Bazzite, Bluefin and Aurora which are derived from Atomic Fedora but come with niceties like distrobox and NVIDIA drivers (if you need them).
I tried Bazzite and it was absurdly, perceivably slower than Garuda on the same hardware. Either the immutable distro thing has way too much unavoidable overhead, or their Nvidia image is not tuned for desktop use.
Been using Bazzite and Project Bluefin on some refurb Dell 7200 2-in-1 I recently picked up and they both work great and really enjoying the experience. They are both part of Universal Blue
So you more or less want a BSD system. Have you tried them? They are a joy to use, can have far better performance than Linux, and have nice and predictable upgrade schedules. The base system is small and very usable out of the box. And documentation tends to be excellent.
In other words, with your requirements what are you still doing on Linux?
I've been curious about BSD in the past -- the thing that stops me is that I like to play with software that requires containers (Docker), and I'm not sure if I'd ever get used to the difference from the core GNU CLI utils.
The other thing that worries me is that I've had a lot of trouble building software that mainly supports BSD from source on linux machines. I'm worried if I switch to BSD, a lot of the software I want won't be available in the package manager, which would be fine, but I'm worried that building from source will also be a pain and binary releases for linux will not be compatible. Sounds like a lot of pain to me.
I'd be happy to be corrected if these are non-issues though.
You can install GNU coreutils out of the package repo no problem. Software packages are mostly available except closed source stuff that is Linux-only at which point you would use the Linux compat layer and it mostly works.
Docker is Linux-only for now but there is movement in that area. BSD had jails first and there is work on making generic containers work across multiple operating systems, including BSD. But I think the point of using BSD is to not bring Linux with you. Try their way and see how you like it.
A long time ago I have used all BSDs and loved them. Eventually the performance of Linux hooked me back, but I guess it's always a good time to go BSD again. I miss the predictability of the upgrades.
A few years ago I switched to KDE and the experience has been absolutely seamless and good. And while the upgrade to Plasma 6 took some time to propagate down to distros, it was well worth the wait!
It seems to me that a project like KDE might be in a very good position to make a very competitive distro, simply because they are starting from the point of the user experience, the UI if you will. Think M$ Windows: it IS the GUI, and fully focused on how the user would use it (I'm thinking of the days of XP and Win 7).
A KDE distro might be less encumbered with "X11 vs Wayland" or "flatpak vs <insert package manager name here>" discussions and can fully focus on the user experience that KDE/Plasma desktop brings!
That's exactly what's compelling to me as well, as an absolute fan of KDE and all its features, as well as its stability. Who better to seamlessly integrate everything around a KDE desktop than KDE themselves? KDE neon had potential as well, but I really like the notion of an immutable base system and fewer surprises during an upgrade.
This bit is a no-go for me. They've decided what goes in the immutable base OS and allowed in a set of KDE apps, citing the subpar experience of the Flatpak versions. I'm guessing they haven't tested all Flatpak apps as thoroughly as they tested their own apps.
"Well, we’re kind of cheating a bit here. A couple KDE apps are shipped as Flatpaks, and the rest you download using Discover will be Flatpack’d as well, but we do ship Dolphin, Konsole, Ark, Spectacle, Discover, Info Center, System Settings, and some other System-level apps on the base image, rather than as Flatpaks.
The truth is, Flatpak is currently a pretty poor technology for system-level apps that want deep integration with the base system. We tried Dolphin and Konsole as Flatpaks for a while, but the user experience was just terrible."
Nathan (who is a QA person with user-visible breakage ever-present on his mind) is talking about the alpha and the present-day situation, which naturally isn't set in stone. KDE is a Flatpak contributor. One of the little skunkworks projects within KDE Linux is even exploring further evolution of Flatpak that would allow putting Plasma itself into one, etc. This is an ongoing story, you shouldn't assume dogma.
KDE Ark is a graphical file compression/decompression utility. It's not a system app and does not require deep integration with the base system. It's a bit of a strange choice of app to include in the system image.
Which is odd. Windows was able to browse ZIPs like normal folders since... 98? XP? Can't remember now.
IMHO KDE delegates too much core functionality to apps. On macOS, I can press "space" while having a file selected and I get an instant preview. This sort of thing must not be delegated.
Is this true? I was under the impression windows wasn't able to decompress zip files natively till very recently, like windows 11. I could be remembering wrong.
At the very least it does add context menu entries for compression to files, apart from "open with" obviously. That might already be the reason right there.
That likely depends on the desktop environment. I have packages installed on my steam deck that add context menu entries, so clearly it's not impossible (my system still remains read-only, though I've been thinking about using an overlay like rwfus to get some new native packages, due to annoyance of self-management of self-built and downloaded ~/.local stuff)
> KDE Linux is Wayland-only; there is no X.org session and no plan to add one.
Does this mean they're testing that all the Wayland bugs are fixed? I haven't updated to the new Debian stable quite yet, but all the previous times I've switched to Wayland under promises of "it's working now" I've been burned; hopefully dogfooding helps.
The issue is that you are using Debian stable. Software quickly becomes out of date, sometimes by years, with the exception of security fixes and occasional backports.
Wayland, KDE, and several other pieces of software evolve rapidly. What may be broken in one release will very likely be fixed a few releases after the last debian stable release.
I'll run Debian on a server if I need predictability and stability with known issues. I won't run Debian on a desktop or workstation for the same reason.
I've tried distros with faster cadences. All that means is that I get an endless stream of new bugs, rather than a few that I can find workarounds for (such as just reverting to the still-good X11).
I worked with a guy who railed against the conservatism of our company's releases. He said "new software has more bug fixes." Then again, he was maybe a kind of hardcore software quality guy -- not the sort to add "features" to a piece of infrastructure that had demonstrated its worth.
The only issue I have with software conservatism, like Debian, is that some new thing requires something newer. If you live in a world where you can do without the new thing, then it's really quite nice. Security patches are another matter, but are usually dealt with.
I like to be on the bleeding edge, but Debian was created for a reason. Only time can determine which configurations don't suck.
I used to "hate" Wayland, but that was because I was stuck on an ancient kwin_wayland implementation that didn't get updated for years on Ubuntu.
When it comes to big changes like Wayland and Pipewire, you really want the latest versions you can get. Like the OP, I only use rolling releases on my machines for that reason.
Even as of Ubuntu 24.04 there's still plenty of stuff that's just broken. Can't stream my screen from Discord, can't share my screen from Firefox. Weird color problems with my calibrated monitor. Switching to Xorg solved all of these issues.
I'm open to moving to Debian testing/unstable if Wayland can actually deliver. What do you run?
My CachyOS / KDE install with pure Wayland has been buttery smooth and recently got an update that finally lets me calibrate the max brightness of my HDR OLED monitor (which was the monitor's fault. Not even Windows could make it work properly for non-games until now). CachyOS is also the first distro I've used in years that does things close enough to way I like out of the box that I haven't bothered to update my system reinstall script in months.
I've also been giving Bazzite to some non-tech people who have not once asked for help. That one is immutable and Wayland only, so it's a further testament to how far Wayland has come if you're on an up-to-date-enough system.
Sadly, I'm stuck on older Ubuntu for my work laptop because the mandated security software won't run on anything better.
I get that this is the current LTS release, but clearly this isn't what the parent poster had in mind. Notably, 24.04 never shipped Plasma 6, which carried a lot of critical Wayland fixes.
Yeah, I wouldn't even bother with Wayland on Ubuntu unless it works out of the box.
I'm on an unholy amalgamation of Arch/Cachy/Endeavour now, but I have been using screen sharing nearly everyday on calls via Firefox on Arch for about a year and it's worked without a problem.
I considered Debian testing, and it does work well on servers, but a true rolling release is more convenient. The software release and update loop is shorter, it's really nice to be able to pull fixes to packages in a reasonable amount of time after they're released.
None of this is a problem on Debian stable. I even run Discord as a Flatpak; screen share works fine. I believe there are systems for that now (waypipe? the xdg-desktop-portal stuff?)
Ubuntu 24.04 is older than Debian stable currently.
Regardless, that's still a huge Linux usability issue when the user needs to know for sure the specific source to install a friggin web browser where screen sharing works.
Indeed, though not so much Linux but rather a Ubuntu-specific issue. Most (all?) other distributions don't distribute Firefox as a Snap, so screen sharing will work out of the box.
Oh I absolutely agree. I've seen way too many people fall into this trap, install Firefox from Snap, Zoom client from Snap, what could possibly go wrong? Turns out, quite a lot!
I’m sitting with software rendering in the browser today because some update broke HW acceleration. I have had it working in the past, but I’ve had like 2-3 updates in the past 2 years break it.
And for what it’s worth there was a really bad tearing bug because of the argument over implicit and explicit synchronization that neither side wanted to fix. I think it’s since been addressed but that was only like in the past 6 months or something. So it’s definitely not been “years” since it’s been working seamlessly. Things break at a greater rate than X because X basically is frozen and isn’t getting updates.
That has been my experience. The browser seems fine for me.
With Arch, you have to read up ahead of time before updating software because it's a rolling release.
I remember one breaking change when I was switching from the previous Nvidia drivers to the new 'open' ones, but some breakage was expected with that change.
And yet I tried setting up Manjaro to see what all the fuss was about with Arch-based systems. In less than ten minutes I understood the origin of all the "krashes" memes.
I've been running Debian stable (with backports) as my desktop for a couple of years now. I find that KDE is updated enough, and Wayland is stable enough (on my hardware, of course: a 13-year-old MacBook and an 8-year-old NUC). Honestly, as a simple user, I haven't appreciated any difference between X and Wayland sessions, so I just log into Wayland.
While I appreciate all the folks singing our praises, as an upstream developer I think you deserve a better response than "you are holding it wrong" :)
We think that the Wayland session currently is the better choice for the majority of our users. It's more stable and polished, performs better on average, and has more features and enables more hardware.
That said, there are still gaps in the experience, and non-Latin input is one of them. In principle it's roughly on par with X11, but it's still pretty crap; e.g. at the moment we give you a choice between having a virtual keyboard and having something like ibus active, when many users want both at the same time, and a lot of the non-Latin setup stuff is still not part of the core product, which is very user-unfriendly.
The KDE Linux alpha in particular will definitely and unequivocally not serve you well, as it currently doesn't ship any ibus/fcitx.
The good news is that this is something we are very actively working on in the background right now. We have an annual community-wide goal election process that made Input improvements one of our goals, and the topic has been all over our developer conference this year.
Another huge gap is accessibility. No Wayland compositor has managed to implement screen reader support that works with existing applications yet. And no, GNOME's Wayland compositor did not achieve this. In typical GNOME fashion, they threw away all support for existing screen readers and accessibility and invented two entirely new GNOME-only protocols that no software except theirs supports.
I'm in a similar boat: I tried the Wayland session in Debian 10 and 11 and lasted less than a day; in Debian 12 I toughed it out for about a week before hitting a showstopper; but this time in Debian 13 I've used it since release without a single nit to pick.
Has any distro ever promised that there are zero bugs in the software they use? I don’t particularly like Wayland but a lot of people have been using it for years at this point…
User adoption is not really a great metric when it ships as a default on common distros. Most people would rather deal with issues and wait for support than fix things in an unsupported way.
If it wasn't a default, it'd go back to barely being used.
Yep lol my experience on Windows 11 is that when opening a laptop, there's a realistic chance the taskbar will hang and have to restart itself (which takes a surprisingly long time)
This is my reality too and the taskbar takes some stuff down with it.
Also the taskbar is just broken in general. It'll pull tons of apps behind the '...' button even though there's plenty of room on the taskbar and it'll also put fake apps that aren't actually open on the taskbar.
I agree that it isn’t a great metric for, like, how good the desktop environments are in some overall sense. I’m just saying it has enough users that it isn’t some niche thing where a ton of bugs can easily hide.
I think "most" are fixed. I use quotes because I've seen people say they have issues that I have never run into myself.
I'm currently stuck on Windows for some old school .NET work, but otherwise have been running Wayland on either arch or fedora for 8 or so years, no real problems specific to Wayland. With that said, I've also always had X to fall back to for the odd program that absolutely only worked in an X session. At this point, though, I don't even recall what they were (probably something that didn't like running under Swaywm because wlroots), so even that might not be an issue.
When was the last time you tried Wayland? I switched to KDE Plasma a couple years ago not knowing anything about display server protocols and haven't had a single issue.
The last time I tried it extensively was on Debian Bookworm (12.1 and later; I always wait for the first point release), released July 2023 but freezing sometime around February 2023.
Yes, this was a while ago now. But just as now, people said then "all the bugs are fixed and missing features added"; all that really means is "we're in the long tail". I might've put up with it if not for the fact that there were 2ish major bugs that directly affected my main workflow (e.g. temporarily swapping to non-Latin text input).
Same experience. I switched back to Linux a few months ago after a few years hiatus. Installed Arch and KDE Plasma. Literally didn’t even know I was using Wayland until I had to fiddle with something and realized X wasn’t even installed
I've been using Wayland exclusively for 4 years now on Arch Linux (maybe more, I forget). At this point, it is better than X11. It still has bugs, but then so does X11.
Fractional scaling is fixed in Plasma 6 though. So, if you need that, it has been good for 1 year now.
I don't want to call Linux old-fashioned, but to still be working the kinks out of a windowing system in 2025 boggles me... it's almost as if there's a resistance to GUIs or something.
The window manager in Windows is horribly buggy and extremely slow. Lots of animation flickering and snapping for seemingly no reason. Try maximizing Firefox while a video is playing and watch the animation - or, usually, lack thereof.
Wayland is, by far, the best windowing system I've ever used. No dropped frames, ever. It's actually kind of uncanny; it feels like you're using an iPhone.
GUIs are tough in open source because they need way more than just code.
You need UX designers, testers, feedback loops, and infrastructure — stuff most volunteer projects can’t sustain. That’s why devs can dogfood CLI tools but GUIs are a whole different beast.
Wayland works great for me. I use a rolling update distribution so everything is the latest version and I only use Firefox, a terminal, and emacs. Debian tends to be pretty far behind.
Bugs in the window manager or shell (both shipped by KDE) are somewhat more common, but even if they are crashes, due to X11 being better-designed for isolated faults they are easily recovered-from without loss of session.
X11 not supporting modern display technologies is arguably a bug, and it's not likely to get resolved at this point (e.g. it can't do mixed DPIs, or VRR with multiple displays, or HDR in general).
I don't care about any of those things, since computers are about productivity for me.
But I'm pretty sure at least half of them actually do work under X11, it's just that some UI libraries refuse to use it on the grounds of "X11 is outdated, I won't support features even though it does".
(also, having played around with DPI stuff on Wayland, it's pretty broken there in practice)
Yep. I feel the same about all the various Wayland compositors. Even 15 years on, none of them have managed to implement accessibility support for existing Linux applications. No screen reader support on any but GNOME's compositor, and even that doesn't work with existing applications; GNOME invented 2 new incompatible protocols that only their compositor supports.
No HDR or high DPI is an annoyance. Not supporting accessibility is a real deal breaker. Especially for commercial settings where things like Americans with Disabilities Act compliance matter. And even more for me, with my retinas slowly tearing apart and losing my eyesight: the entire Wayland ecosystem is extremely inconsistent and buggy.
No. HDR will never come to X11. This is because the X protocol defines a pixel as a CARD32, that is an unsigned 32-bit integer. So the highest depth you could theoretically go is R10G10B10, and forget about floating-point HDR. Fixing this would require rewriting the entire protocol. Which has effectively already been done; it's called Wayland.
Perhaps people ought to listen to the Xorg devs when they say X11 is broken and obsolete, and you should be using Wayland instead. Every single one of them says this.
All sorts of things in X11 are "defined" as a particular thing in the base standard, then changed in protocol extensions. You really shouldn't be writing raw pixels anyway (and most people don't since breaks if your monitor is using 8-bit or 16-bit, for example).
X11 supports all sorts of obsolete pixel formats, including 1bpp mono, 4bpp and 8bpp indexed color, and 16bpp "high color" modes. In order to display an image in X11, you need to understand the pixel size, organization, and how colors are mapped to pixel values (all of which are available in a data structure called a visual).
> So the highest depth you could theoretically go is R10G10B10, and forget about floating-point HDR.
R10G10B10 matches most HDR displays out there, AFAIK even Windows uses that outside of DirectX (where FP16 is used).
But beyond that...
> Fixing this would require rewriting the entire protocol.
...most of the protocol does not have to do with color values. X11 is extensible, and an extension can be used that allows alternative functions that use more advanced color values where that'd make sense. For example, assuming you want to use "full" range color values for drawing functions like XFillPolygon, etc., you'd want to have the extended-range state in graphics contexts and introduce extended commands for changing it (with the existing commands simulating an SDR color for backwards compatibility). That is assuming R10G10B10 is not enough, of course (though because for decades many applications assumed 8-bit RGB, it is a good idea to do sRGB/SDR simulation for existing APIs and clients regardless of the real underlying mode of the monitor, unless a client either opts in to using extended color or uses the new APIs).
Another thing to keep in mind is that these are only really needed if you want to use the draw primitives with extended color / HDR. However, most HDR output, at least currently, is either done using some other API (e.g. Vulkan) or via raw pixel data. In that case you need to configure the window output (a window region, to allow for apps with mixed color spaces in a single window; think Firefox showing an SDR page with an HDR image) to use a specific color space/format and then rely on other APIs for the actual pixel data.
This is something I wanted to look into for a while now; unfortunately other stuff always ends up having more priority. And well, my "HDR" monitor is only HDR in name: it barely looks any different when I try to enable HDR mode in KDE Plasma under Wayland, for example :-P. I do plan on getting an HDR OLED monitor at some point though, and since I do not plan on changing my X11-based environment, I might take a look at it in the future.
Again. This is a thing the xorg devs have already looked at. Their conclusion? "Nope. Too much work. Just use Wayland."
Once again, every... last... one of the Xorg devs is of the opinion that you should be using Wayland instead. Even if you had changes to propose to Xorg, they will not make it into a release. If you insist on soldiering on with X, your best bet is probably to contribute to Wayback, likely to be the only supported X11 display server in the near future, and see if you can add a protocol to the compositor to allow "overlay" of an HDR image displayed using Wayland "on top of" an X window that wants to do HDR.
I wish it were that easy to switch to Wayland, but I have always run into serious issues. Granted, it has been a year since I last tried, so who knows.
I use X11 features such as highlight-to-copy, pasting with the middle mouse button and/or Shift-Insert (just to mention one), and I use xclip extensively to copy contents of files (and stdin) to it. I use scrot, I use many other applications specifically made for Xorg, and so forth. I have a custom Xorg config as well, which may or may not work with Wayland.
Thus, I do not think I could realistically switch to Wayland.
> and I use xclip extensively to copy contents of files (and stdin) to it.
I won't say anything against your other points (and in fact I am typing this comment on Xorg because I have my own list of reasons), but https://github.com/bugaevc/wl-clipboard is almost drop-in for xclip/xsel.
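For the common cases the mapping is nearly one-to-one; a quick sketch (file names are illustrative):

    xclip -selection clipboard < notes.txt   # X11
    wl-copy < notes.txt                      # Wayland equivalent
    wl-paste > out.txt                       # read the clipboard back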
> Again. This is a thing the xorg devs have already looked at. Their conclusion? "Nope. Too much work. Just use Wayland."
My comment isn't about how much work something would need, but about how it can be done.
> Once again, every... last... one of the Xorg devs is of the opinion that you should be using Wayland instead.
Good for them, but I have my own opinions.
> Even if you had changes to propose to Xorg, they will not make it into a release.
Maybe, maybe not. AFAICT the official stance has been that nobody wanted to work on these things, not that they are against them; they just do not want to do the work themselves.
But if they do not make it into a release, there is also the XLibre fork or there might be other forks in the future, it isn't like Xorg is some sort of proprietary product. I'd rather stick with Xorg as it seems more professional but ultimately whatever works.
> see if you can add a protocol to the compositor to allow "overlay" of an HDR image displayed using Wayland "on top of" an X window that wants to do HDR.
TBH this sounds like an incredibly ugly and fragile hack. There are two main uses for HDR support: embedded HDR (e.g. in a firefox window) and fullscreen HDR (e.g. for videos/games). For the latter there is no point in an overlay, just give the server the full screen. For the former such an overlay will require awful workarounds when you want more than just a self-contained rectangle, e.g. you want it clipped (partially visible image) or need it to be mixed with the underlying contents (think of a non-square HDR shape that blends into SDR text beneath or wrapped around it).
From a pragmatic perspective the best approach would be to see how toolkits, etc, use HDR support in Wayland and implement something similar under X11/Xorg to make supporting both of them easy.
> But really, consider switching to Wayland.
I've been using Window Maker for decades and have no interest in something else. Honestly, I think that adding Wayland support to Window Maker or making a Window Maker-like Wayland compositor would both be more effort, and harder, than adding HDR support to Xorg. Also, I sometimes try KDE Plasma Wayland for various things, and several of my programs have small but annoying issues under Wayland.
That said, from a practical perspective, one can use both. The only uses for HDR I can think of right now are games and videos, and I can keep using my Xorg-based setup for everything while switching to another virtual terminal running KDE Plasma Wayland for the games/videos that I want to see in HDR. Pressing Ctrl+Alt+Fn to switch virtual terminals isn't any different from pressing Win+n to switch virtual desktops.
Not really, that is OP's point. The Xorg maintainers don't really want to enhance X11 and add new features, only critical bug fixes. That is one of the reasons there are now X11 forks like XLibre.
Ultimately it doesn't matter now, because Xorg is in a state of "active abandonment": the only maintenance being done is fixing critical security issues on the distros Red Hat still supports, and nothing else. In open source, you go where the developer energy is, and right now that's Wayland.
If you're about to tell me that XLibre is a viable alternative: no, it isn't.
I've been using KDE since before Wayland was a twinkle in Red Hat's eye, so trust me when I say that Wayland has always come across as an afterthought from KDE. I'm not saying it was, but given all the issues KDE users have had with Wayland over the years, it sure looked that way. If somebody I loved was having trouble with KDE, the first thing I'd ask is whether they had accidentally switched to Wayland (usually because of an upgrade). The majority of the time they'd check, sigh, and say yes. Switching back, their problems would go away.
Reading this thread makes me want to try KDE/Wayland again, so probably on my next install I'll give it another shot. If it's still crap I think it's time to switch off of KDE.
I recently switched to hyprland but before that I was running a mixed HDR/SDR, mixed VRR/no VRR, mixed refresh rate setup with icc profiles applied under KDE wayland. No issues here tbh.
I installed Debian 13 on my laptop from 2014. It's got an NVIDIA K1100M. The latest proprietary driver supporting it is 390 which is not supported by Debian 13. It was by Debian 11. I skipped 12. I run Nouveau and Wayland and everything that didn't work with Wayland on Debian 11 works now, with one unfortunate exception: backlight control is broken, which means that I'm stuck with 100% brightness. That's probably a problem with the kernel or the driver because it happens with X11 too.
X11 has a workaround for that because I can use gamma correction to simulate brightness control and make it work with night light. There was no way to do it in Wayland: they stomp on each other and undo whatever the other software did. So I'm back to X11 and frankly I don't notice any difference.
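Concretely, the X11 workaround looks something like this; a sketch assuming xrandr and an output named LVDS-1 (use whatever name xrandr reports on your machine):

    # simulate reduced brightness in software; does not touch the backlight
    xrandr --output LVDS-1 --brightness 0.6
    # the same knob can warm the colors, like a crude night light
    xrandr --output LVDS-1 --gamma 1.0:0.9:0.8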
If you have more luck with your graphic card you'll be probably OK with Wayland. Anyway the X11 session is there, logout from Wayland and login using X11.
I'm still dual booting. Debian 11 to work and Debian 13 to finish setting up everything.
With Debian 11, kernel 5.10.0-35-amd64
I was sure that I was using the NVIDIA driver 390, but I ran dpkg -l before replying to you and found out that I'm actually running the 470.256.02 driver. I'm definitely running the NVIDIA card, because NVIDIA X Server Settings tells me that X Screen 0 is on "Quadro K1100M (GPU 0)". I see it also in /var/log/messages.
cpuinfo reports that my CPU is an i7-4700MQ @ 2.40GHz, which according to the Intel web site has an integrated Intel® HD Graphics 4600. I think I never used it. NVIDIA X Server Settings does not report it, but it's an NVIDIA program, so I would not be surprised if it doesn't see it. Anyway, the kernel module for Intel should be i915 and it's not there. Maybe I have to load it, but I'm phasing out this version of the OS. I'm pretty sure I never installed anything to switch between the two GPUs. There used to be something called Bumblebee. Is that what you are using now?
Apparently I can install the 470 driver in Debian 13 https://forums.debian.net/viewtopic.php?t=163756 but it's from the unstable distribution and if Nouveau works I'm fine with that. I'm afraid that the NVIDIA driver and Wayland won't mix well even on 13 so I'll be on X11 anyway.
Very interesting. Thanks a lot for this! I will experiment and see if I can get it working.
I use older Thinkpads with Optimus switching, so using the Intel GPU is not optional: it is always on, but the OS offloads GPU-intensive stuff to the nVidia GPU.
In my testing with Debian 12, I could not get my nVidia chips recognised at all. In some distros this has the side effect of disabling the DisplayPort output, which is a deal-breaker, as I use an external monitor and the machines do not have HDMI.
I was just pointing this out in the other comment. So no, the bugs are still there: plenty of system-level graphics glitches in all but the most trivial circumstances.
I hope it also means they've managed to do what no Wayland compositor has managed in the past 15 years: working accessibility (screen reader support, etc.) with existing applications. Otherwise this is just another toy/demo distro.
And no, GNOME's Wayland compositor did not achieve it either. They threw away all accessibility support and then invented two new GNOME-only protocols for it that no software except GNOME's own compositor supports.
Works fine in current KDE master branch, and it's been working for quite a while so it should be in the current release. Note that I run Teams in MS Edge for Linux, which is my dedicated Teams runtime environment and sandbox.
I wish them the best of luck. I never used Neon, since it was a rolling release distro. This one I also won't be using, because it is immutable and relies on Flatpaks, which are very buggy. Standalone binaries or AppImages are fine with me, but Flatpaks and Snaps are garbage.
Not only is Arch also a rolling distro (despite them saying "not Arch!"), Arch is one of the most horrible rolling distros in terms of stability. Their general principle for package breakage is "you should have checked the release log on our site". They don't throw an error or a warning; if something is a breaking change and you pull it into your system, you basically get a "hehe, should have checked the release log", and you're hosed.
If you want a good, actually professional rolling release, use SUSE Tumbleweed. They test packages more thoroughly, and they actually hold back breaking or buggy changes instead of the "lol read log and get fucked" policy.
This is a misunderstanding from the user's POV. One that a lot of people have.
Arch is a DO IT YOURSELF distro. They write that everywhere they can. The stability of the installation is literally ON YOU! It is your responsibility as a DO IT YOURSELF distro user. They didn't trick you into it or something.
Expecting Arch Linux to spoon-feed you is like expecting IKEA to give you assembled furniture.
You should use openSUSE or other "managed" rolling release distros. Arch IS NOT A "managed" rolling release distro.
My installation is now 6 years old. I never kept any point-release distro that long. Stability depends on your hardware, for starters. And secondly, Arch is DIY. Do not use it if you can't get it to work for your use cases. We have 300+ distros to choose from. I am just politely telling you that having Arch take care of your installation for you was never a promise from the project.
It's software. It will work the way it is written. As simple as that.
That doesn't follow; DIY is a spectrum. It can be perfectly reasonable for a DIY distro to ship a package manager, just as it can be reasonable for it to run on existing hardware instead of expecting you to break out the soldering iron.
Anecdote: 12 years with Arch, including a laptop with 9 years on one install. Zero issues. But yeah, there’s a low volume mailing list. Get on it. Read it, it’s very short and to the point, and it’s only a few times per year.
Very uncharitable perspective on people that do the work for free. I can understand not wanting to use a distribution where breakages can happen, but being a dick about it less so.
> Arch is one of the most horrible rolling distros
We've had different experiences. I've been using Arch for about 8 years and have had to scour the forums no more than thrice to find the magic incantations to fix a broken package manager. In all cases, the system was saved without a reinstall. However, it is certainly painful when pacman breaks.
That's a very different experience from mine. I've had quite a few broken packages, easily over 10 in the last year and a half. It was easy enough to find them and roll them back, but I don't know how people can say Arch is stable. Do you update regularly?
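For anyone wondering, rolling back on Arch usually means reinstalling the previously cached version of the package; a minimal sketch (the package file name is illustrative):

    # pick the older version out of pacman's local cache
    sudo pacman -U /var/cache/pacman/pkg/somepkg-1.2.3-1-x86_64.pkg.tar.zst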
I don't want to have to manually scroll through all the release logs on every single upgrade in case there might be a landmine in there this time. Nor does any rational person who values their time or their system stability.
It is a million times more sane to have a package manager throw a warning or an error when a breaking change is about to be applied, rather than just YOLO the breaking change and pray people read the release log.
It is one of the most stupid policies ever, and the main reason why I will steer everyone away from Arch forever. Once bitten, twice shy.
I've been using Arch Linux for over a decade and have literally never once consulted release logs, and never got into any serious trouble.
I do subscribe to the arch-announce mailing list which warns of breaking changes, but that receives around 10 messages per year, and the vast majority aren't actually all that important.
I've also gone multiple months between updates and didn't have any problems there either.
The idea that Arch Linux breaks all the time is just complete nonsense.
That’s three times too many. I have been running an Ubuntu server at home for 10 years and went through probably 4 LTS releases and the number of times apt flaked out on me - exactly zero.
I'm running Ubuntu 24.10 and they broke the upgrade to 25.04 if you're using ZFS on the boot drive. Their solution was to prevent the upgrade from running, and basically leave behind anyone stuck on 24.10 to figure it out for themselves.
If they weren't going to support the feature why did they provide it as an option on the installer without any warnings or disclaimers? This isn't some bespoke feature that I hacked together, it's part of the official installer. If I had known it wasn't fully supported then I wouldn't have used it.
Maybe by Manjaro's maintainers, but certainly not by Arch's. I've been using Arch for a little over a decade. The position that I've always seen in the official IRC channel is that forks such as Manjaro are explicitly not Arch.
Two years with no updates on a rolling release is not a good idea. Two years with no updates for anything not connected to the internet is not a good idea.
I didn't say anything about the machine being on the internet persistently. It's a laptop sitting in storage mostly. The updates are for when it comes out of hiding.
I guess my only option is to switch to a more stable distro such as Debian or SUSE. Manjaro has always been touted as a very light distro, good for old machines, but its instability makes it a no-go.
> SUSE Tumbleweed
> They test packages more thoroughly, and they actually hold back breaking or buggy changes instead of the "lol read log and get fucked" policy.
I am currently on Arch specifically because Tumbleweed shipped a broken Firefox build and refused to ship the fixed version for a month.
As a workaround I uninstalled the bundled Firefox and replaced it with the Flatpak. And on the next system update the bundled Firefox was back, because for some strange reason packages on SUSE are bundled.
There has been an increasing trend of using upvotes as likes instead of as user moderation, which results in worthwhile discussion sinking to the bottom while stuff like this sits at the top and sets the general tone of the discussion.
Without being too negative, I'd like to point out that Neon, elementary OS, etc. tried the same thing. A project thinks "we need our own distro", but it ends up pulling resources away from improving the desktop environment itself.
GNOME doesn’t maintain Ubuntu or Fedora, but it still dominates the Linux desktop experience.
Gnome has its own distribution called Gnome OS. It’s based on Fedora Rawhide.
It actually looks a lot like what KDE is shipping here, except Gnome provides it as a reference system for their developers at the moment, but it's totally usable as a user if you want to.
> It actually looks a lot like what KDE is shipping here
No, it does not, in any way whatsoever.
GNOME OS does not have dual root partitions, Btrfs with snapshots and rollback, atomic OS updates, or any of the other resilience features which are the unique selling points of KDE Linux.
In case you are unfamiliar with the design of KDE Linux, I have described it in some depth:
I do not personally use GNOME or GNOME Boxes and I've never managed to get GNOME OS to so much as boot successfully in a hypervisor or on bare metal, and I've tried many times.
But I don't think it adopts all these fancy features yet.
It was a relatively recent change [1]. Try the latest Gnome OS nightly ISO in a VM -- you'll see that they've (largely) implemented the partition scheme suggested in ParticleOS: root on btrfs, two partitions for /usr backed by dm-verity, new /usr images delivered using "systemd-sysupdate".
Yes, the key difference is that GNOME has strong downstream partners that treat it as the default (e.g. Fedora Workstation, Ubuntu). This way GNOME gets a lot of testing, polish, and feedback without having to maintain its own distro.
I guess I'm confused on what the difference between "being the most popular Linux DE" and "being the default DE of the most popular Linux distros" is. Other than "already being most popular", what was/is KDE's partnership with these distros lacking that GNOME's wasn't/isn't? Since this all happened 10-20 years before either Neon or KDE Linux, and KDE has long had these kinds of partnerships, I'm assuming there is some other reason/thing you think KDE should be looking at.
Adding on from this new comment: Given whatever differences you see for GNOME in the above, why do you think GNOME has maintained its own testing OS for the last 5 years despite this?
> I guess I'm confused on what the difference between "being the most popular Linux DE" and "being the default DE of the most popular Linux distros" is.
You put the things in quotation marks but I do not see these phrases in the thing to which you're commenting.
KDE is roughly a year older than GNOME.
Snag: KDE was built in C++ using the semi-proprietary (dual-licensed) Qt. Red Hat refused to bundle Qt. Instead, it was a primary sponsor of GNOME, written in plain old C not C++ and using the GIMP's Gtk instead of Qt.
This fostered the development of Mandrake: Red Hat Linux with built-in KDE.
In the late 1990s and the noughties, KDE was the default desktop of most leading Linux distros: SUSE Linux Pro, Mandrake, Corel LinuxOS, Caldera OpenLinux, etc. Most of them cost money.
In 2003, Novell bought SUSE and GNOME developer Ximian and merged them, and SUSE started to become a GNOME distro.
Then in 2004 along came Ubuntu: an easy desktop distro that was entirely free of charge. It came with GNOME 2.
Around the same time, Red Hat discontinued its free Red Hat Linux and replaced it with the paid-for Red Hat Enterprise Linux and the free, unsupported Fedora Core. Fedora also used GNOME 2.
GNOME became the default desktop of most Linuxes. Ubuntu, SUSE, Fedora, RHEL, CentOS, Debian, even OpenSolaris: you got GNOME unless you asked for something else.
KDE became an alternative choice. It still is. A bunch of smaller community distros default to KDE, including PCLinuxOS, OpenMandriva, Mageia... but the bigger players all default to GNOME.
Many of the developers of GNOME still work for Red Hat today, over 25 years on. They are on the same teams as the developers of RHEL and Fedora. This is a good reason for GNOME OS to use a Fedora basis.
> Around the same time, Red Hat discontinued its free Red Hat Linux and replaced it with the paid-for Red Hat Enterprise Linux and the free, unsupported Fedora Core.
This is a common misconception. RHEL and RHL co-existed for a bit. The first two releases of RHEL (2.1 and 3) were based on RHL releases (7.2 and 9). What was going to be RHL 10 was rebranded and released as Fedora Core 1. Subsequent RHEL releases were then based on Fedora Core, and later Fedora.
IMHO a summary a few paragraphs long of a decade of events in a complex industry must simplify matters.
Sure, there was overlap. Lots of overlap. You highlight one. Novell bought SUSE, but that was after Cambridge Technology Partners (IIRC) bought Novell, and after that Attachmate bought the result...
But you skip over that.
I think as a compressed timeline summary, mine was fair enough.
It is really important historical context that KDE is the reason both Mandrake and GNOME exist, and it's rarely mentioned now. Mandrake became Mandriva and then died, but the distros live on, and PCLinuxOS in particular shows how things could have gone if there had been less Not-Invented-Here syndrome.
I don't think "well, actually, this happened before that" is as important, TBH.
> You put the things in quotation marks but I do not see these phrases in the thing to which you're commenting.
Quotes are overloaded: they are used for more than direct citation. In this case, to separate the "phrase" from "the sentence talking about it" (the use-mention distinction, as used here as well). Quotation marks are also seen in aliases, scare quotes, highlighting of jargon, separating internal monologue, and more. If it doesn't seem to be a citation, it probably wasn't meant to be one. On HN, ">" seems to be the most common way to signal a literal citation of something said.
This is a fair enough, even more detailed, summary of the history, but I'm still at a loss for how to stitch this history to what KDE should be doing today. Similarly, why does this relationship produce good reasons for GNOME OS to exist but not KDE Linux? E.g. are you saying KDE Linux should have been based on something like openSUSE (Plasma is the default there) instead of Arch, that they should have stuck to several more decades of not having a testing distro, or that they should do something completely different instead?
I don't use GNOME or KDE as my DE, so I genuinely don't know what GNOME might be doing that KDE should be doing instead (and vice versa) all that deeply. The history is good, but it's hard for me to weed out what should be applying from it today.
Or maybe I read too far into it and it was only a statement that GNOME has historically been more successful than KDE. It's known to happen to me :D.
2. KDE used to enjoy significant corporate backing.
3. Because of some companies' actions, mergers and acquisitions, etc., other products gained ascendancy.
4. KDE is still widely used but no longer enjoys strong corporate backing.
5. Therefore KDE is going it alone and trying something technologically innovative with its showcase distro, because the existing distro vendors are not.
The KDE Linux section of this recent article of mine spells out my position more clearly:
Does immutability mean something like ChromeOS, where you cannot install packages on the system itself, but you can create containers on which you can freely install software, including GUI?
If yes, what are some good options for someone looking for a replacement to ChromeOS Flex on an old but decent laptop?
To add something useful, OSes are the one area where reinventing the wheel leads to a lot of innovation.
It's a complete strip down and an opportunity to change or do things that previously had a lot of friction due to the amount of change that would occur.
What makes you say "the one area"? There are plenty of areas that have enough development friction / inertia such that the same principle applies. Even generally, I think the reason why people caution against reinventing the wheel isn't because it prevents innovation, but because it wastes time / incurs additional risk.
I agree with you. When I read that, my first thought was: "the one area"? Personally I think it's the complete opposite, and I feel that really, really strongly. For at least 10 years now, once a week I think "I miss old desktop operating systems". Any of them: 7, Vista, XP; Snow Leopard, Leopard, Tiger. I even stopped using Ubuntu when it went from GNOME 2 to GNOME 3, and the other options at that time were pretty bad, so I ended up getting back into Macs for my home desktop. I still use all 3 daily, but hate all of them.
Even the desktop environment is not solved. I'm typing this from a relatively new method of displaying windows: a scrolling window manager (e.g. Karousel [1] for KDE). It just piles new windows to the right and scrolls horizontally, infinitely. This seems like a minor feature, but it changes how you use the desktop entirely, and it required a lot of new features at the operating-system level to enable. I wouldn't go back to a desktop without this.
Immutable systems like NixOS [2] have been an absolute game changer as well. Some parts are harder, but having the ability to always roll back, plus the safety of immutability, really makes your professional environment so much easier to maintain and understand. No more secrets, no more "I set something for one project at the system level and years later I forgot and now something doesn't work".
I've been on linux desktop exclusively for almost 15 years now and it has never been as much fun as it is today!
I've long wanted a scrollable/zoomable desktop, with a minimap that shows the overall layout. Think the UI of an RTS game, where instead of units you move around and resize windows. This seems like something in that direction, at least.
How does Karousel work with full screen applications, e.g., games?
Karousel knows when an application wants to be fullscreen and allows it to take the screen. If you use the hotkey for "move focus to left/right window" you can even exit fullscreen to see other programs. You can also force any program to fullscreen with a key. This is a pretty good workflow, as you can fullscreen something and still keep the layout, just not visibly.
Am I the only one who thinks that DBus and XDG are causing a lot of problems?
I would love to see a complete overhaul of those.
In my opinion, if I type "xeyes" and it works (the app shows on my screen), then I should be able to start any other X11 application. However, gnome-terminal behaves differently. I don't know precisely why, but using dbus-launch sometimes works. It is a very annoying issue. A modern Linux desktop feels like microservices connected by duct tape: sometimes it works, and sometimes it doesn't.
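The duct tape in question, for the record (which of these helps depends on how your session was started):

    dbus-launch gnome-terminal           # spawn a session bus just for this app
    dbus-run-session -- gnome-terminal   # run it inside a fresh session bus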
On the DE side, we mostly struggle with polish. This is paradoxically both an issue of not enough fruitful innovation and of the good innovations that do happen not maturing, taking forever to be adopted.
As far as the actual OS, the new sheaves and barns thing in Linux is neat. We need innovation in RAM compression and swapping to handle bursty desktop memory needs better.
The main problem, and the one I'm trying to solve, is that as a software engineer, you have little incentive to make something that millions of people will use on the Linux desktop unless you have some other downstream monetization plan. You will have tons of users who have questions, not code contributions. To enable users to better organize into their own support structures and to make non-code contributions, I'm building PrizeForge.
It's more a matter of best practices than technical details.
You can build a skyscraper on top of the foundations of a shed, and the kernel devs have done an amazing job at that, but at some point you gotta conclude that maybe it is better to start from scratch with a new design. And security is a good enough reason.
If I'm able to do everything I can in my regular Arch Linux installation, it would be nice to try an Arch derivative that is immutable by design.
What I'm afraid of is starting to experiment and finding, more and more, that my workflow is hindered, either by some software not working because the architecture of the OS is incompatible, or by KDE's UX design choices in the user interface.
That's not to say that it wouldn't be interesting, and it would say nothing about the quality of the software if I'd hit such walls, only that I'm not its target audience.
I find that I really like using an immutable distro with a custom image (built with GitHub Actions).
So I can really separate the system-level changes (in the image, version-controlled) from my user changes.
It's a nixos-like experience without using nix at all.
There have been a couple of things to keep in mind with my Bazzite installation; creating users or adding groups, for example, pointed me to systemd-sysusers, but it was simple.
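For reference, a minimal sysusers.d sketch ("myapp" and its paths are hypothetical):

    # /etc/sysusers.d/myapp.conf
    #Type  Name   ID  GECOS             Home
    g      myapp  -
    u      myapp  -   "My app service"  /var/lib/myapp

Running sudo systemd-sysusers then applies it immediately instead of waiting for the next boot.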
I've been wanting to do this! The plan was to modify the Bazzite DX build script, but ultimately Fedora being the base was a deal breaker for me. With KDE Linux this might finally be a dream come true.
It seems like KDE linux uses a different way to provide a system image than ostree on Fedora Silverblue, so I have no idea how easy it is to make changes on top of.
But for Bazzite (and the other Universal Blue distros) you're better off using BlueBuild.
In the end it's an OCI container image, so you could technically just have a Dockerfile with "FROM bazzite:whatever" at the top, but bluebuild automates the small stuff that you need to do on top of it, and allows you to split your config in files.
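A minimal sketch of that kind of Containerfile (the tag and package are illustrative):

    FROM ghcr.io/ublue-os/bazzite:stable

    # layer an extra package into the image
    RUN rpm-ostree install htop && \
        ostree container commit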
You can have a look at my repository to see how easy it is!
Yeah... at this point I would give in to Nix for managing the underlying Arch system. It's not a gentle learning curve, I believe, but at least the community around Nix is strong.
That's what I use too on Bazzite, custom image for system level stuff, and home-manager for user-level stuff.
The nice thing about Fedora Silverblue's model is that it is literally a container image, so to "build" your image you can run any arbitrary commands, so it's way simpler than nix.
For snapshots and rollbacks, my backup strategy with Borg is enough. I also take an hourly inventory of the installed packages, so if I need to I can go back between 2 and 7 days and see what changed. It's usually enough.
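The inventory can be as simple as an hourly cron job; a sketch assuming pacman (substitute your package manager and paths):

    pacman -Qqe > "$HOME/pkg-inventory/$(date +%F-%H).txt"
    # later, see what changed between two snapshots
    diff "$HOME/pkg-inventory/2025-09-01-09.txt" "$HOME/pkg-inventory/2025-09-02-09.txt"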
Backups: my file is gone, overwritten, corrupted, I accidentally deleted contents I want... but my computer is working, so I will retrieve a copy from my backup.
Atomic updates: aargh, my computer was half way through installing 150MB of updates across 42 packages, but one file was bad and the update failed, so I rebooted and now it won't boot! No problem, reboot to the boot menu, choose the previous snapshot, a known-good config, and you can boot up and get back to work, until the update is available again.
Thanks for the suggestion. I find it very discouraging to experiment with sparsely documented projects, it feels like you are unwelcome in such projects.
I'm not a Linux user (yet) and I'd like to understand what "immutable" means here. Does it mean that I can't, eg, install Elixir or an IDE on it? I have absolutely no interest in deeply tuning the OS, which is why I'm interested here - I've been on Windows for decades for a reason. But if installing applications is blocked, or cumbersome, then who is this for?
It means the base system doesn't support individual package updates. Similar to a docker image, upgrading to the next version requires a complete base-image upgrade. In general, it shouldn't affect your ability to add additional software on top, but it may impact how you do so (e.g. Fedora Silverblue only allows Flatpak containers on top of the base OS).
Immutable here just means there is a base OS+libs that you don't touch. So Elixir or an IDE would install into a sandbox, together with any needed libraries not included in that base, instead of installing all the libraries and stuff globally.
If mix can work without sudo/root, it will absolutely work on an "immutable distro". On the other hand, this particular immutable distro may not have all the C libraries BEAM/Elixir expect in its base, and while Silverblue does let you add to the base, this one doesn't sound like it will. So it might take some effort, hard to say at this point, though you can always add to your PATH.
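If the base image lacks what BEAM/Elixir expects, a container sidesteps the problem entirely; a sketch using Distrobox (image and package names are illustrative):

    distrobox create --name dev --image archlinux:latest
    distrobox enter dev
    # inside the container; the immutable host stays untouched
    sudo pacman -S elixir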
> KDE Linux is an immutable distribution that uses Arch Linux packages as its base, but Graham notes that it is "definitely not an 'Arch-based distro!'" Pacman is not included, and Arch is used only for the base operating system. Everything else, he said, is either compiled from source using KDE Builder or installed using Flatpak.
Funny; sounds more like a BSD (a prebuilt single-artifact Arch "base system" + KDE Builder-based "ports collection") than a Linux.
I agree with you in principle. However, this distro in particular makes use of an immutable base system, which although not new, is definitely not extremely common among Linux distros.
Immutable distros today feel like someone read a CNCF "best of" publication and decided to throw it at desktop Linux to see what sticks. Not everyone wants to be a DevOps engineer.
I think the concept has promise (see: ChromeOS) but the execution today is still way too rough.
Note that it's not necessarily an "Arch distribution" in the sense you might expect:
> KDE Linux is an immutable distribution that uses Arch Linux packages as its base, but Graham notes that it is "definitely not an 'Arch-based distro!'" Pacman is not included, and Arch is used only for the base operating system. Everything else, he said, is either compiled from source using KDE Builder or installed using Flatpak.
This is where I've been for the last 7 years. Very happy with it. I'm looking forward to an Arc Pro machine with SR-IOV GPU capability for VMs. That is pretty much my dream desktop, as much as I care to have one.
The premise "we write software which is installed on operating systems, so we need our own operating system as well" doesn't make sense. Also, pointing out that other operating systems like elementary OS or GNOME OS exist is moot.
At least for elementary OS I kind of get the promise of a high-quality, user-experience-focused macOS competitor... but KDE OS? Why should I not just install KDE on my distro?
This distro doesn't seem to be born out of some real need for non-KDE-developers? Maybe it should be just some playground for KDE devs to test drive new tech?
> This distro doesn't seem to be born out of some real need for non-KDE-developers?
It's born out of a few things:
a) KDE as a community has increasingly focused on good and direct relations to end-users of late, which e.g. has resulted in most of the funding now coming from individual donors. Wanting to make more of their computer-using experience better isn't a strange impulse to have.
b) The community has hardware partners (e.g. laptop makers) that want to collaborate on pre-installing something with a KDE-focused out-of-the-box user experience. That has so far been Neon, which has a number of engineering and stability issues that have been difficult to overcome. KDE Linux is an attempt to improve on that.
c) It's also generally driven by a lot of the lessons learned from the SteamOS and Neon projects, and it attempts a lot of new solutions for risk-free updates, hackability, the out-of-the-box experience, and, down the road, likely also backups. The team does think there is a value proposition to the distro as such, beyond the KDE GUI.
d) The developer audience isn't unimportant either. More KDE developers on an immutable+sandboxed apps distro will mean more eyeballs on e.g. Flatpak problems, improving that for everyone else. Many recent new distros that ship Plasma by default (e.g. SteamOS, Bazzite, CachyOS, etc.) benefit.
a) I get that a lot of users use KDE. And they love the desktop environment. But is there demand for an OS? Would those users switch? I hope so, but for such a big decision - building, supporting, and maintaining a whole OS - I'd expect some kind of poll, maybe? Some input saying "30% of KDE users would switch to KDE OS"?
Is there some kind of proof?
I've been using GNOME for years but never felt I would want to switch to some GNOME OS. The desktop environment is one of many tools in my distro (for me, at least).
b) Supporting lots of hardware (especially laptops!) seems to be a huge time sink for people not primarily involved in kernel/driver stuff, or not?
c) ok..
d) Same as a): will all KDE devs use KDE OS? And is it good to have the KDE devs on KDE OS when the majority of users are on Arch/Debian/Ubuntu/Fedora? I'd rather have a good chunk of those devs use my distro...
I love using KDE and use it on all my desktop machines. I even have a source compiled version ready to test / hack on if I need - utterly fun and easy to build using kde-builder and works on most distros including Ubuntu/Debian, Arch and Fedora.
That said, I don't think having yet another immutable distro is a great idea if they are only going to punt and use Flatpaks. You can run Flatpaks on any distro out there, so I don't really understand the idea behind this. Nothing really stands out from the article: they still need to make KDE work great on most other modern distros, so it isn't like a Flatpak-based KDE is going to give them an edge in having the best KDE on their own distro.
I could have sworn they have had this for a while... Nice that it is Arch-based. I wonder if they bothered to look at Arkane Linux, which is also atomic, and whose maintainer has published all his scripts so anyone can make their own spin. I feel like working together could have been beneficial for both KDE and the maintainer of Arkane Linux.
> Unlike Fedora's image-based Atomic Desktops, KDE Linux does not supply a way for users to add packages to the base system. So, for example, users have no way to add packages with additional kernel modules.
But then, since / is rw and only /usr is read-only, it should be possible to install additional kernel modules, just not ones that live in /usr - unless /lib is symlinked to /usr/lib, as happens in a lot of distros these days.
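A quick way to check; on most merged-/usr distros /lib is just a symlink:

    ls -ld /lib
    # output like "lrwxrwxrwx ... /lib -> usr/lib" means modules under /lib
    # actually live in the read-only /usr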
Well, as long as they're either updating frequently or you're not using nvidia drivers (which are notoriously unpleasant with Wayland) I guess it's fine for a lot of people.
KDE made me fall in love with Linux. The UI familiar to Windows users, the insane customizability, the snappiness: each and every one of their contributors is legendary.
Will this help KDE-Plasma finally move from pre-alpha more towards something that can be used daily, or will we still need another decade or two?
Asking this as a user who would really love to move away from X11, but every time I try anything Wayland-related it's just alpha or pre-alpha: endless graphics glitches, windows going black or flickering (double the glitches after turning the display off/on), multiple rendering issues with Firefox, CLion, etc.
I think I'm mentally preparing to use X11 until retirement....
The thing is the first 90% of software is the easy part. Once you've done that you still need to do the other 90%. And the latter 90% is what separates little hobbyist weekend projects from products. It's a relentless boring grind of testing, fixing bugs and sharp edges and adding workarounds.
A week ago: KDE Plasma 6, whatever is the latest on Arch.
Using the proprietary NVIDIA driver... glitching like a MOFO. Looks slick, but just way too buggy to be used.
Some things to try:
* Try turning your display on/off
* Try using several virtual displays and spread graphics apps on each one (I use 4 normally)
* Try opening 20 firefox windows with ~50 tabs each
* Try opening an 8K PNG in a Firefox tab (or in some other image viewer)
So yeah... pre-alpha.
P.S. I also tried XFCE and Enlightenment... and those are not any better (not that they claim to be anything but pre-alpha).
Honestly... on Windows 11 the experience is just so damn smooth and slick. Nothing glitches or hangs. The Linux graphics stack just lags behind decade after decade... never catches up...
Ah, I haven't tried it on NVIDIA drivers in a while.
I'm doing a reinstall on my gaming PC soon, so I'll give it a shot then. I've been using it on Intel and AMD systems and haven't had issues. But you know, those actually have drivers that are designed for the modern Linux graphics stack.
> P.S. I also tried XFCE and Enlightenment... and those are not any better (not that they claim to be anything but pre-alpha).
So... maybe the NVIDIA drivers then? And not KDE Plasma?
> The Linux graphics stack just lags behind decade after decade... never catches up...
Come on, you can't really blame NVIDIA's dogshit drivers that refuse to integrate into the rest of the stack on the KDE devs.
No, what I meant with XFCE and Enlightenment is that they admit to being alpha.
Yeah, well, the reality is that the NVIDIA drivers are the drivers one wants to use on NVIDIA hardware (which many of us have).
And somehow they work fine on X11.
It's always nice to blame the driver vendor, but what have the Linux community, the kernel team, and the graphics team done to promote Linux and make it simple to write correct, performant drivers for the platform? How many graphics memory allocators are there? How many buffer-sharing APIs? Are the kernel driver interfaces stable?
I've been using KDE Plasma on Wayland (Debian 13) since release as a daily driver, and I'm happy to report that it is super stable, has no problems waking up from suspend and hibernate, and is a superb all-around shredder. I haven't noticed any glitches, flickering, or bugs so far, despite intensive daily abuse.
For me it is natural that since the desktop environment is the most important part of the desktop operating system, it should have its own distribution.
The apps do not install or update any included libraries in the base image of the OS. An app may rely on a specific or minimum version of the OS, but that's it. Everything the software needs is installed into its own sandbox, and other applications cannot share it.
We should go back to static linking. With CI/CD generating new packages is trivially easy.
Then we can throw out all these fancy packaging tools like Snap and Flatpak, all the fancy half-done COW filesystems like Btrfs, all the fancy hidden-delta-syncing stuff like OSTree, and just ship one honking great binary in a single file that works on anything, no matter the libc, so it even works on musl or whatever.
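For what it's worth, a fully static build is nearly a one-liner these days; a sketch assuming musl-gcc from your distro's musl tools package:

    musl-gcc -static -O2 -o hello hello.c
    ldd ./hello    # prints "not a dynamic executable"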
I wouldn't say they are reinventing the wheel. Putting a new set of rims on them, maybe...
"KDE Linux is an “immutable base OS” Linux distro created using Arch Linux packages, but it should not be considered an “Arch-based distro”; Arch is simply a means to an end, and KDE Linux doesn’t even ship with the pacman package manager."
CachyOS is great; I've been using it for months and it's been good overall. There is also Garuda Linux, which looks great too; I've only tested it a little, but it's worth trying if you are in your distro-hopping phase: https://garudalinux.org
> Neon has "served admirably for a decade", he said, but it "has somewhat reached its limit in terms of what we can do with it" because of its Ubuntu base. According to the wiki page, neon's Ubuntu LTS base is built on old technology and requires "a lot of packaging busywork". It also becomes less stable as time goes on, "because it needs to be tinkered with to get Plasma to build on it, breaking the LTS promise".
I run KDE Neon and this checks out. However, it's a terrible idea to create another distro. The Linux world needs more cohesion. I might go back to Ubuntu if their KDE 6 is decent now. I use the DE for the purpose of running programs in the environment and that includes being able to easily set up things like CUDA, which is easiest with Ubuntu, a PITA with other options.
What do you mean by cohesion? I feel like cohesion with Linux would mean cohesion on the desktop environment (which we have, with GNOME and KDE being so popular) and on the packaging format (which we have with Flatpak).
I mean "Linux" (as a user OS) should for the most part be a common experience, so if a very technical or very non technical person wants to do something or someone wants to support someone doing ordinary things using a different distro, they aren't on a whole new arbitrary mini-adventure full of surprises, except at the desktop level.
Mind you, even though I've been running Linux for decades, I have lost the enthusiasm for the low-level details and am happiest when I can use apt for everything and have the OS manage dependencies and updates. I see a lot of negative comments about Flatpak, and my experiences haven't been great, so I don't know whether it is comprehensively good and will solve issues like low-level drivers (GPUs).
The big one: a different combination of packages, i.e. which versions are available, and how they're configured and integrated. This generally also means they will have different package managers and configuration tools. Things have gotten a lot more regular between distros but there's still notable differences in philosophy between them, how much you notice kind of depends on how much of a power user you are and how prone to breakage your use-case and preferred applications are.
Distributions are like cars. They all get you from point A to point B, some of them will suit you less than others, and some people are really picky about which one they use for reasons.
Shifting on the wheel, floor, knob, buttons, etc. I've stuck mostly to Ubuntu/Debian based distros because I'm more comfortable with them and they have tended to be more sturdy/stable for my own usage (currently Pop COSMIC alpha though).
The main differences are related to packages. The package format (.deb, .rpm, etc), the package manager (dpkg/apt, pacman, dnf, etc), how frequently the packages are updated, if they focus on stability or new features, etc.
New Linux users who are used to Windows or Mac sometimes dislike one distro and like another, when what they really disliked was the desktop environment. For example, Kubuntu uses KDE Plasma as its desktop environment, and its user experience is almost the same as Fedora KDE, Manjaro KDE, openSUSE, and so on, while it's very different from default Ubuntu (which uses GNOME). But under the hood, Ubuntu and Kubuntu are the same (you can even uninstall KDE and install GNOME).
Actually, other Unix-based systems can install the same desktop environments that we have on Linux, so, if you have a FreeBSD with KDE you won't even notice the difference to Kubuntu at first, even though it's a completely different operating system.
tl;dr: there's a real difference, but from a user perspective it's mostly under the hood, not exactly in usability.
KDE seems to reinvent the wheel here and I wonder where they are going with that. There are pretty mature "immutable" distributions out there that could serve as a foundation and offer a lot of the same features that KDE Linux is supposed to support. For example, Aeon (of openSUSE MicroOS vintage) looks like all KDE Linux is aiming for, just with Gnome as DE.
There's a fair amount of overlap and collaboration in the engineering communities behind the different image-based/appliance OS projects, so it's not necessarily as redundant as you might think it is. E.g. the developers behind the distro tech behind KDE Linux, Gnome OS and Kinoite are pretty friendly with each other.
And of course the distros end up sharing the bulk of the application packages - originally a differentiator between the classic distros - via e.g. Flatpak/Flathub.
One reason we're doing KDE Linux is that if you look at the growth opportunities KDE has had in recent years, a lot of that has come from our hardware partners, e.g. Slimbook, Tuxedo, Framework and others. They've generally shipped KDE Neon, which is Ubuntu-based but has a few real engineering and stability challenges that have been difficult to overcome. KDE Linux is partly a lessons-learned project about how to do an OEM offering correctly (with some of the lessons coming out of the SteamOS effort, which also ships Plasma), and is also pushing along the development of various out-of-the-box experience components, e.g. the post-first-boot setup experience and things like that.
Other than being immutable, I doubt it. Immutable distros tend to rely on flatpaks to dynamically install new packages. Unfortunately the flatpak codebase is largely unmaintained at this point, and nearly impossible to get changes merged in.
According to kde.org/linux it comes with Flatpak and Snap. Distrobox and Toolbox. They don't seem to just pick a lane to be consistent, it's all kind of random.
It's at an alpha stage; it's reasonable to see what people will use, also because having an immutable base and needing tools to install things on top is still somewhat new.
KDE and GNOME are backing Flathub together, and a lot of the community effort goes into Flatpak packaging.
I did not realize anyone outside of Ubuntu used snap. When I was on Ubuntu, I had many annoyances with snap, but not sure if they have since improved the experience.
I wish them luck. But going Wayland-only instead of supporting X11 means they're throwing away all the accessibility support that is integrated into all Linux software. Their toy distro won't be ADA-compliant, and I certainly won't use it, since it lacks screen reader support.
This has been hammered on by very prominent voices a lot. Stop making new "distros". Especially if you just want different defaults. You should be able to declare the defaults and apply them to your base distro, and if you can't there's your problem.
Most distros could be NixOS overlays. Don't like Satan's JavaScript? Try Guix. Bottom line: the farther I get away from binaries discovering their dependencies at runtime, the happier I am.
And let's also imagine what compels people to recommend enthusiasts onto paths where they will be more successful.
Maintaining distros that are not some kind of overlay that can track the underlying base automatically is just asking for more maintenance than people will want to do, while also balkanizing options for users: overlays can be composed, but distro-hopping very much does not compose.
There really is no such thing as a "new distro" these days. Everyone with the itch to roll their own is on Debian or Arch, with a tiny handful of cool kids hacking on Nix instead. Scanning down:
> KDE Linux is an immutable distribution that uses Arch Linux packages as its base, but Graham notes that it is "definitely not an 'Arch-based distro!'"
Honestly, I find Debian Testing good enough for the latest KDE Plasma. I have never understood the need for a specific distro for your desktop software, and I have never found Neon useful.
The only pain point I really found, even developing for KDE on Debian, was the switch from Qt 5 to 6, but that is always a risk and you can just compile Qt from source.
Another pain point is that their dev package manager doesn't have a way to conveniently target library/package branches. So you can spend a fair amount of time waiting for builds to fail and then passing the library or package version into the config file. Very tedious, and it no doubt cost me a lot of time when trying to build on top of Akonadi, for example.
People on Arch are eating the bugs too. I think KDE would go MUCH farther if they just made their tooling a little easier and bundled that well enough. They wouldn't need a separate distro.
Ah, yes, the KDE people are definitely the people I trust most to deliver a reliable system and not go crazy chasing incongruent rewrites of things while abandoning what works...
I approve of this - Linux distributions need to go and they needed to go about 20 years ago. They are the fundamental reason why Linux is not successful.
Distributions are literally the worst thing about Linux - and by worst I really mean it in a way that is filled with the most amount of disgust and hate possible, like one feels toward a literal or social parasite.
Linux distros provide little to no value (after all, these people just package software); they are just vehicles for petty losers to build their own fiefdoms, where they can be rulers. They (and the people who run them) are acid on the soul; they poison the spirit of openness and sharing by controlling who gets to use what.
Their existence was always political, and the power they wielded over who gets to use and see your software was stomach-churningly disproportionate to the value they provided.
Much like petty internet forums with pathetic power-tripping mods, a given Linux distro's maintainers get to decide that you, the dear programmer, the actual creator of value, get your work judged, and your right to deliver your software to users gated, by a distro maintainer: a petty tyrant who might not have the time, or might have some weird mental hangup about shipping your software. And even if they do ship it, they might fuck up your package, and the distro-crafted bugs will reflect badly on you.
I can shit on Microsoft and Apple all I want and it'll never impede my ability to deliver software to my users.
This is why open source failed on the desktop, and why we have three orders of magnitude more open-source zealots and ignorant believers than actual programmers who work on useful stuff.
And why no one with actual self-respect builds software for the Linux desktop out of their own free will, and why garbage dumps and bugs and missing features persist for decades.
Imagine the humiliating process it takes for a dev to ship a package on Linux: first you have to parley with maintainers to actually include your stuff. Then they add a version that's at best half a year out of date, to jive with their release cadence. You're forced to use their vendored and patched libraries, which are made bespoke for their use cases, get patched for the 5 apps they care about, and can break your stuff at the drop of a hat.
And no, you can't ship your own versions, because they'll insta reject your package.
This is literal Windows 98 DLL hell, but Microsoft was at least a for-profit company you could complain to, and they actually had a financial stake in making sure users' software worked. Not so with Linux distros; they just wanna be in charge and tell everyone what they get to use.
Then you have the "universal" packaging formats.
First, Ubuntu and snap should burn in hell. Much like their other efforts, they made a universal system that's hated by everyone and used by no one except them, and they keep pushing it with their trademark dishonest tactics copied from other dishonest vendors: even if you get rid of the excrement that is snap, they keep reinstalling it via updates.
Flatpak was meant to work like a reasonable package manager would: you assume a stable OS base, and you demand and provide that, full stop. This is how Windows and macOS have worked forever, and it doesn't even occur to devs that people using those OSes will have trouble running their software.
As I expected, downvoted but not countered: the zealot scoundrel shows his true face; his tools are not reason but whipping his herd of loyal mouthbreathers and turning them against people who disagree with him.
So essentially people are abandoning the memory/speed efficiency of the .so ecosystem and seeking exe/msi-style convenience... you know... a dump of legacy DLL/static-.so-snapshot versions with endless CVEs no one will ever be able to completely fix or verify.
Flatpaks can have insecure permissions, but those permissions are not only transparent but easily editable. Meanwhile, native packages are guaranteed to have insecure/full permissions.
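For example, inspecting and tightening an app's holes takes two commands (the app ID here is just an illustration):

    flatpak info --show-permissions org.mozilla.firefox
    flatpak override --user --nofilesystem=home org.mozilla.firefox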
In general, SELinux profiles use Mandatory Access Control, and not Discretionary Access Control. However, most desktop users find it difficult to understand, and often have bigger problems from reading silly posts off the web.
An outdated old package library relies on people understanding/tracking the complete OS scope of dependencies, and that is infeasible for a small team.
If someone wants in... they will get in eventually... but faster on a NERF'd Arch install. =3
>most desktop users find it difficult to understand, and often have bigger problems
That is exactly the strong point of Flatpaks. It's a lot easier to use a toggle in a GUI for permissions than to write whole new profiles. Not to mention that many people even disable SELinux because it is so difficult.
>An outdated old package library relies on people understanding/tracking the complete OS
It takes zero understanding to copy-paste an outdated-package warning and report it to the repo listed on Flathub. It explicitly tells you as much.
Security/dependency updates depend solely on the specific maintainers. The platform itself doesn't automatically fix developer or maintainer lethargy in this regard.
Snap's and Flatpak's only real legitimate use-case is legacy compatibility:
1. Current release applications on deprecated OS (Mostly good)
2. Deprecated applications on current OS (Mostly bad)
The Windows-style packaging architecture introduces more problems than it solves. Fine for running something like Steam games with single shot application instances using 95% of system resources each power cycle, but folks could also just stick with Windows 11 if convenience and security theater are their preference.
Some people probably won't notice the issues, but it depends what they do. Arch Linux itself is a pretty awesome distro for lean systems. =3
>single shot application instances using 95% of system resources each power cycle
Source? There is no measurable energy or efficiency difference, at least for Flatpak, on any semi-recent hardware. I know that snaps do take a couple of seconds longer on first start.
I prefer Flatpaks for proprietary and internet-facing applications because of their easy sandboxing capabilities. There is also the advantage on Arch Linux of not needing to do a full system update for a single application.
Getting into why the community argued for years while Debian brought up deb version-controlled packaging is a long, dramatic conversation. Some people liked their tarball mystery binaries, and the .so library trend started more as a contest to see how much people could squeeze out of a resource-constrained machine.
In a single unique application-running context, the power of cached .so reference counts is less relevant, as a program built with shared objects may reuse many resources that other programs (or a prior instance of itself) have already loaded.
> ldd --verbose /usr/bin/bash
> ldd --verbose /usr/bin/cat
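(To make the reuse concrete, a quick illustrative one-liner that lists the shared objects both binaries pull in; output varies by distro:)

```
comm -12 <(ldd /usr/bin/bash | awk '{print $1}' | sort) \
         <(ldd /usr/bin/cat  | awk '{print $1}' | sort)
```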
Containerization or sandboxing is practically meaningless when punching holes for GPU, network, media and HMI devices. Best of luck =3
>Containerization or sandboxing is practically meaningless when punching holes for GPU, network, media and HMI devices
Many applications don't need these permissions and even the ones that do will be much more secure than having full user space access by default.
Someone would have to exploit the system to gain more access, versus not needing to do anything because they have full access by default. It's like arguing you don't need a root password because sudo is insecure anyway.
Not really; if some noob deploys janky code they don't understand, then someone will eventually worm it for sure. Containerization has not prevented an uptick in nuisance traffic from cloud providers, but made it orders of magnitude worse.
Qubes, Gentoo, and FreeBSD are all a better place to start if you are interested in this sort of research. Best of luck =3
But also hilariously still paying the runtime cost of ELF dynamic linking instead of just statically linking, so that you at least avoid, e.g., GOT indirection overhead.
Again, static linking would only be useful in a single unique app run-and-dump scenario. People do link and strip .a sometimes when porting to Windows and macOS.
Some programs take a huge memory and performance hit on non-Linux machines. =3
> Some programs take a huge memory and performance hit on non-Linux machines
You're implying without stating it (or providing any evidence) that programs perform worse when statically linked than when assembled out of ELF DSOs, even when each of those DSOs has a single user.
That makes no technical sense. Perhaps you meant to make a different point?
An 8 kB program loads and runs much faster if the .so it uses is already cached due to prior use.
A 34 MB statically built version will cost that amount of I/O for every single instance on a system that did not previously cache that specific program. It will also take up that full amount of RAM every single time it runs.
Inefficient design, but it works fine for other, less performant OSes =3
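As a rough sketch of the size argument being had here (numbers vary wildly by toolchain and libc, and `-static` needs a static libc installed):

```
# Compare a dynamically vs statically linked do-nothing program
printf 'int main(void){return 0;}\n' > t.c
cc t.c -o t-dyn              # reuses the shared libc already in memory
cc -static t.c -o t-static   # embeds everything in the binary
ls -lh t-dyn t-static        # the static build is typically much larger
ldd t-dyn                    # lists the shared objects the dynamic build reuses
```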
Also, static programs are demand paged like anything else. Files aren't loaded as monoliths in either case. Plus, static linking enables better dead code elimination and devirtualization than is possible with an AOT-compiled and dynamically linked setup, which usually more than makes up for the text segments of shared dependencies having been pre-loaded.
I'm not sure you have enough technical depth to make confident assertions about linking and loading performance.
> =3
The "blowing smoke" emoticon isn't helping your argument.
If a stripped, statically linked build saved that much space, then people probably chose the wrong library resources. Sometimes ripping out unreachable areas also has unintended consequences, but stripping debugging resources is usually safe.
If .so reuse is low, or the code is terrible... it won't matter much. Best of luck =3
KDE seems to be losing the plot here-- how does this help build the best possible DE for the community? I feel like they are fragmenting developer attention and time by futzing around with this.
Meanwhile there are issues that haven't been solved for months; the latest Plasma version has barely any decent themes (the online community theme submissions seem to be rife with spam), Discover is not really useful and needs curation, and settings and configuration are everywhere to be found, which is great for the average power user but makes it hard to know what you can tweak without being overwhelmed. Flatpak is great, but really needs improving, more TLC, and work towards cleaning up. It's looking more and more like the Android app store every day.
KDE needs to stop trying to be everything to everyone and start getting a little more opinionated. I'd rather have a few well maintained components of a DE than many components that are no better than barely polished turds.
In any case, it's my favorite DE, and each and every KDE developer is an absolute legend in my mind.
> KDE seems to be losing the plot here-- how does this help build the best possible DE for the community? I feel like they are fragmenting developer attention and time by futzing around with this.
A lot of the manpower working on this previously worked on KDE Neon, so it's perhaps better to think of it as a lessons-learned project that doesn't in fact do what you worry about (and it has already attracted new contributors who also improve things elsewhere).
KDE also does serve users (and hardware partners) with Neon, and they deserve improvement.
There's also the fact that new users increasingly experience KDE software as Flatpaks on new distros that ship Plasma by default, e.g. Bazzite and CachyOS, and it makes sense to get more developer eyeballs on this to make sure it's a good experience.
After decades of development and billions of dollars in investment, can we have just one distro that works as smoothly as macOS? Then we can get back to having 2,000 others for that one time we need to run Linux on a coffee maker.
I don't know that that will happen; not even Windows is as smooth as macOS. But that's because Microsoft and Linux developers are tackling a more difficult problem: getting an OS to work with effectively infinite hardware permutations. Apple has given themselves an easier problem to solve, with just a handful of hardware SKUs and a few external buses.
That said, Android is pretty stable, because a given Android distro typically only targets a small hardware subset. But I don't think that's the kind of Linux distro that most people contributing to FOSS want to work on.
Apple has also yanked backwards compatibility a few times. I bet Microsoft would love to trash a few legacy API decisions from decades ago.
That being said, I still think Microsoft should have developed a seamless virtualization layer by now. Programs from before year X run in a microVM/Wine-like environment. Some escape hatch to kill off some cruft.
I had to use it for ~2 years for work and am glad that I am back on Linux. The amount of instability, bugs, lacking or removed(!) features between updates, missing software packages, horrible user experience... was just astonishing. You need a lot of fanboyism to cope with that.
> KDE Linux is an immutable distribution that uses Arch Linux packages as its base, but Graham notes that it is ""definitely not an 'Arch-based distro!'"" Pacman is not included, and Arch is used only for the base operating system.
So it's basically a SteamOS sibling, just without Steam?
Sounds like a good distro to use with your parents and grand parents, if they're not solely using iPads...
That might be their target audience.
What appeals to me about linux is the hackability and configurability. This takes it all away in some way, but that's not to say that they won't find a market for it.
Seems targeted at office workplaces. A locked-down system that cannot even be corrupted or tampered with. Consider a workplace of a receptionist at a medical office, or a library computer.
Linux is wonderfully flexible, which allows to create distros like that, among other things. Linux is also free as in freedom, which may be very important for trusting the code you run, or which a governmental official runs.
I bet that past the alpha stage they will offer a configuration tool to prepare the images to your liking, and ways to lock the system down even more. Would work nicely over PXE boot.
The problem for this use case is that certain businesses, like medical offices, use specialized software that is often Windows only.
More and more of this software is moving to the cloud and only requires a web browser. A distribution that is very difficult to break and can launch a web browser would already meet many use cases for receptionists, hotels, consultation stations, etc.
Yes, but doctors' offices are still the last places in the US to use a fax machine.
The fax protocol provides a real-time recipient receipt. Email doesn't.
Seriously. That's the reason that fax is still popular in the medical industry.
If only a standard existed to do this... Hint: it exists since ages in Italy and it has been extended to Europe recently (See Registered Electronic Mail - RFC 6109 and ETSI EN 319 532 – 4)
Also, the limitations of fax sort of end up being its differentiator from email and its biggest advantage. Not needing an email server is a big boon, not really being susceptible to phishing is a boon, and with modern fax-over-internet it's virtually indistinguishable in user experience from email.
I remember fax phishing from even before I had ever heard of email. At many large companies, simply paying a sub-$100 invoice was standard procedure without even checking with the other internal bodies.
This is true, but it's much less of a concern because:
1. You get way fewer faxes than emails.
2. Faxes can't steal credentials.
3. You should be auditing expenses anyway.
The United States is not the only country in the world. In France, it is almost impossible to make an appointment without using Doctolib, which is SaaS software for booking consultations (and lots of other things).
Same in Germany. Doctolib got popular very quickly, in just a few years. Now it’s almost mandatory.
I am not a fan. It’s a big outage waiting to happen. It’s an enormous data breach waiting to happen. It will inevitably be enshittified.
Doctolib is not the problem at all. The real problem is the lack of government proactivity on these initiatives.
If the government had already thought about this in advance (even in 2013, when Doctolib was just starting out), then there could be very strong protections for data, which would allay all of these concerns, and we might have had multiple players in this space.
The best use of Doctolib for me is that I can make appointments without having to speak perfect German on the phone. I can make appointments in the evening when I'm back from the office and can relax a little bit. So Doctolib is a godsend for me as an immigrant here, and I'm guessing for a lot of people too. I can look up doctors who are available without having to bother the receptionist. This is a much more efficient way of doing things.
Doctolib is a B2B model. Patients are not the customers; medical practices are the customers. Doctolib saves on the cost of a medical secretary, which is why it is so popular.
What's more, this is a sensitive and regulated field, where trust is essential. They can't afford to mess around if they don't want to quickly find themselves subject to more restrictive regulations.
They were heavily criticised in France because they allowed charlatans and people with no medical training to register (particularly for Botox injections). As soon as this became known, they quickly rectified the situation.
> It will inevitably be enshittified.
That only happens with the Western venture-capitalist model in private companies. Doctolib's makers already have income from all these government contracts instead of just relying on adverts and hype.
Not just in the US; they're surprisingly popular still here in Switzerland. I've written interfaces to fax gateways (convert incoming fax to PDF, extract metadata, save in DB) multiple times.
Germany here. Fax is king.
In that case, wouldn't ChromeOS actually make the most sense?
ChromeOS stops getting updates when your hardware gets a bit too old, at that point even your web browser is no longer updated.
That's ridiculous.
Because Chrome OS is offered on low-cost laptops that are unsuitable for office work.
What's more, it's Google, so we're not safe from a ‘Lol, we're discontinuing support for ChromeOS. Good luck, byeeee.’
Some offices still have bad memories of Google Cloud Print, for example. I'm not saying that being an early adopter of a distribution that's less than a year old is a good solution. Just that Google's business products don't have a very good reputation.
> Because Chrome OS is offered on low-cost laptops that are unsuitable for office work.
ChromeOS Flex exists, it is free of charge, and it runs on more or less any x86-64 computer, including Intel Macs.
Nordic Choice got hit with ransomware and, rather than paying, just reformatted most of its client PCs with ChromeOS Flex and kept going with cloud services.
https://www.bitdefender.com/en-us/blog/hotforsecurity/nordic...
Businesses seem okay using Google Chrome, Google Drive/Docs, and Gmail.
In my experience they're not; these are way less popular in enterprises compared to the Microsoft equivalents.
Being #2 with tens of millions of users is OK, you know. It doesn't mean you've failed.
Sure it's less popular. It came in under 20 years ago, competing against an entrenched superpower that was already nearly 30 years old back then. It's done pretty well.
The Google Apps for Business bundle has outsold by far every single FOSS email/groupware stack in existence, and every other commercial rival as well.
Notes is all but dead. Groupwise is dead. OpenXChange is as good as dead. HP killed OpenMail.
Because ChromeOS is not an open base?
It is.
https://opensource.google/projects/chromiumos
https://www.chromium.org/chromium-os/
My medical devices run Windows due to specialised software. But on my medical office PC I use Linux: EMR and receipts through a web app in the browser (locally hosted, but it can be cloud), LibreOffice, Weasis DICOM, etc.
Wine/Proton gets better every day though.
Doctors have better things to do than learn Linux and Wine.
Their office buys their stuff from a supplier which ships them a Windows box with all the batteries included.
My non-software-engineer friends have better things to do than learn Wine, and yet they use it every day when playing games on their Steam Deck, unaware of its existence.
And that supplier could decide to bundle their box with such a distro, if it saves them money, either through licensing or better stability (= less support).
It is possible for somebody to make this into a workable bundle targeting specific professions/environments. A doctor would not care whether double-clicking the X icon opens an app through Wine or not.
Nice idea, but enterprises can't rely on Discord for tech support; they need stuff that works, and to be able to get it fixed when it doesn't.
Jira, on-prem and cloud, works just fine on Linux. In my experience support tickets usually go through there. And then calls and stuff are on Zoom or maybe Teams; both also work on Linux.
Wine makes zero difference in how the application looks and behaves; that's the point.
Until there's a bug in Wine that affects the software that you use or new update of your software that uses stuff incompatible with Wine.
For games? Yes (with some very major caveats). Non-basic applications, not so much.
Are you working as a doctor? Or do you work in tech?
Are you a doctor?
I happen to have started my career doing IT support for doctors and veterinarians...
That does seem like a good niche, indeed, though many people would probably misunderstand its purpose because it's called a “KDE distribution”. It would perhaps have been better if it were created by some independent group for this purpose and just happened to settle upon KDE as its interface, or rather offered multiple choices, to be honest.
I disagree, KDE needs both a distro and a niche for that distro to fill:
> KDE is a huge producer of software. It's awkward for us to not have our own method of distributing it
No, KDE does not need its own distro; that's the issue. They don't need their own method of distributing their software, and it benefits no one.
The idea of a distribution for this specific purpose is best left in the hands of some organization with experience with this specific purpose, not KDE whose experience is developing desktop environments.
How exactly is it “awkward” for them and how exactly does distributing this in any way improve the development process of KDE? They can't even dogfood it obviously.
Plasma[1] is a desktop environment made by KDE, who also makes lots of other software. They make stuff like Dolphin (file manager), Konsole (terminal emulator), and Partition Manager as OS basics already[2].
[1]: https://kde.org/plasma-desktop
[2]: https://apps.kde.org/
What you want may be an "immutable" distro (which KDE Linux also is). There are already several immutable distros, such as Fedora Silverblue.
It doesn't necessarily take much hackability away. You might find it makes it easier.
You can overlay changes to the read-only rootfs using the sysext mechanism. You can load and unload these extensions. This makes experiments or juggling debug stuff a lot easier than mucking about in /usr used to be.
A lot of KDE Linux is about making updates and even hackability safe in terms of making things trivial to roll back or remove. A goal is to always be able to unwedge without requiring a reinstall.
If you know you can overlay whatever over your /usr and always easily return to a known-good state, hackability arguably increases by lowering the risk.
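A minimal sysext sketch (assumes systemd >= 251; the extension name "mytools" and the copied binary are purely illustrative):

```
# Stage an extension tree that overlays /usr
sudo mkdir -p /var/lib/extensions/mytools/usr/bin
sudo cp ~/bin/mytool /var/lib/extensions/mytools/usr/bin/

# Every sysext needs a matching extension-release file
sudo mkdir -p /var/lib/extensions/mytools/usr/lib/extension-release.d
echo 'ID=_any' | sudo tee /var/lib/extensions/mytools/usr/lib/extension-release.d/extension-release.mytools

sudo systemd-sysext merge     # /usr now shows the overlaid files
sudo systemd-sysext unmerge   # back to the pristine base image
```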
This overlay feature sounds attractive. It bothers me that there is no easy traceability or undoability when I perform random system-level Ubuntu configuration file edits to make things work on my system. Maybe I'm doing it wrong. Sure I could do the professional sysadmin thing and keep a log book of every configuration change, or maybe switch to NixOS and script all my configuration changes, but something with lower effort would be welcome. Ideally you want the equivalent of "git commit -m<explanation>", "git diff" and "git log" for every change you make to system configuration.
CachyOS and openSUSE have you covered with btrfs and snapper pre-configured to take snapshots before/after doing potentially damaging things (and, of course, you can make them manually whenever the thought occurs to you that you're entering the "danger zone"). You can boot into a snapshot directly from the bootloader, then roll back if you need to.
Immutable distros just one-up that by trying to steer the system in a direction where it can work with a readonly rootfs in normal operation, and nudging you to take a snapshot before/after taking the rootfs from readonly to read-write. (openSUSE has you covered there as well, if that's your thing; it's called MicroOS).
Both of those distros use KDE by default, so the value-add of KDE having its own distribution is basically so they can have a "reference implementation" that will always have all the latest and greatest that KDE has to offer, and showcase to the rest of the Linux world, how they envision the integration should be done.
If I were to set up a library computer or a computer for my aging parents, I would choose openSUSE Leap Micro with KDE, as that would put the emphasis on stability instead.
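For reference, the snapper flow mentioned above is only a few commands (a sketch; snapshot numbers and descriptions are illustrative):

```
sudo snapper create --description "before driver fiddling"
sudo snapper list                 # see snapshots and their numbers
sudo snapper undochange 42..43    # revert file changes between two snapshots
sudo snapper rollback 42          # or roll the whole root back and reboot
```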
There's also https://getaurora.dev/ - another immutable KDE-based distro. I've been using it as my daily for ~half a year now. It just works.
> Ideally you want the equivalent of "git commit -m<explanation>", "git diff" and "git log" for every change you make to system configuration.
If you already commit all your changes, anyway, what keeps you from using Nix and running one more command (`nixos-rebuild switch`)?
I keep my /etc under Git. When the system does changes automatically (via an update or whatever), I make a Git commit with a special distinct message, and so I can easily filter out all my own changes.
Etckeeper does that for changes to /etc https://wiki.archlinux.org/title/Etckeeper
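A sketch of that flow on a Debian-family system (etckeeper defaults to git and also auto-commits around package operations):

```
sudo apt install etckeeper
sudo etckeeper init
sudo etckeeper commit "baseline"

# From then on, normal git tooling works inside /etc
cd /etc && sudo git log --oneline && sudo git diff
```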
> something with lower effort would be welcome
This is a major reason I ended up with https://getaurora.dev. I layer a few things, but it comes with bells and whistles (like NVIDIA drivers, if you need that).
I can't see myself going back to a "normal" distro. I don't want to spend time cosplaying a sysadmin, I have things to do on my computer.
> It doesn't necessarily take much hackability away.
It does, though - as evidenced by my Steam Deck - it adds enough friction to make me not bother most of the time.
I think Aurora Linux[1] is more suitable for this purpose.
However, while I love the approach of having an immutable distribution, I don't see the attack vector of ransomware handled in a good way. It does not help if your OS is intact but your data is irrecoverably lost due to a wrong click in the wrong browser.
I think the backup-and-restore landscape has enough tools to fix this (cloud + restic[2], or automated ZFS snapshots[3]), but it takes a bit of time / a script to set up something like this for your parents in your favorite distro.
1: https://getaurora.dev/en
2: https://github.com/restic/restic
3: https://zrepl.github.io/
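To show how small the moving parts are, a minimal restic sketch (repository path and paths to back up are illustrative):

```
restic -r /mnt/backup/restic init                  # one-time repository setup
restic -r /mnt/backup/restic backup ~/Documents    # incremental, deduplicated
restic -r /mnt/backup/restic snapshots             # list what you have
restic -r /mnt/backup/restic restore latest --target /tmp/restore
```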
I have just checked, and Aurora Linux does not offer support for any Nvidia card older than 16xx.
Looks like they used to, so they have removed the option.
Strange, since Bazzite still has 900- and 1000-series driver options.
Building your own is an option https://github.com/ublue-os/image-template
I am willing to try an officially supported image, but I am definitely not building my own to run a computer for my mom given that Windows 10 support ends; I don't have the spoons nor the time for that.
But I guess it is better to have the option than not to have it.
If this is related to the split in Mesa for "Gallium" and "non-Gallium" support, you could try installing the amber branch. Older nvidia video cards are still supported that way.
However, the only distro I could find where it actually worked was Chimera. Not the gaming-related ChimeraOS but the from-scratch LLVM-compiled all-static APK and Dinit distro with a hodgepodge userland ported from the BSDs.
It's rolling release though so it'll happily install the latest bugs. But it probably does that faster than any other distro.
I mean, nothing stops you from building your image of KDE Linux (or any immutable distro) with a built-in restic config.
This is more about preventing the user from messing up their computer than it is about data safety.
I've been using Bazzite for 2 years now (an immutable distro based on Fedora Silverblue) and I just love the fact that I can "unlock" the immutability to try something that could mess up my systemd or desktop environment, and then just reboot to erase it all away.
I also have a GitHub Action to build my custom image with the packages I want and the configuration I want.
And this makes adding a backup setup even easier; it can be baked into the distro easily with a custom image! Your grandparents don't have to do anything: it will auto-update and auto-apply (and even roll back to the n-1 build if it fails to boot).
> nothing stops you from building your image of KDE Linux
Isn't the main point that you delegate curating and building the system image to the KDE project?
No, the main point is they provide a reference image using mkosi, and you can clone kde-linux and trivially make spins. At some point I expect just about everyone is gonna find a spin which scratches all their itches and which they are devoted to.
> I mean, nothing stops you from building your image of KDE Linux (or any immutable distro) with a built-in restic config.
I hear you. The problem is that basically nothing stops you from building anything yourself. The difference is that there is no easy-to-use built-in solution (like Time Machine), and ease of use is what makes the difference. Especially a TIME difference. Of course there is software SIMILAR to Time Machine, but it seems to be hard to write something rock-solid and easy to use.
In fact I have also built it myself: https://github.com/sandreas/zarch - a script that installs Arch on ZFS with ZFSBootMenu and preconfigurable "profiles" for which packages and AUR builds to use. Support for the CachyOS kernel with integrated ZFS is on my list.
I already thought about putting together a Raspberry Pi image that uses SSH to PULL backups over the network from preconfigured hosts with preconfigured root public keys and is easily configurable via a terminal UI, but I have not found the time yet :-) Maybe Syncthing is enough...
> However, while I love the approach of having an immutable distribution, I don't see the attack vector of ransomware handled in a good way
The philosophy of security in "modern" OSes is to protect the OS from the user. The user is evil and, given so many rights, will destroy the (holy) OS. And user data? What user data? /s
> Sounds like a good distro to use with your parents and grand parents, if they're not solely using iPads...
THIS!
I was pondering putting Linux on my father's ancient machine (still running Windows 7; or migrating him to something slightly newer, but Win10/Win11 doesn't rub me the right way), but I was wary of "something wrong happening" (and I'm away right now).
And having an immutable base would be awesome: if something goes wrong, just revert back to the previous one and voila, everything still works. And he would have fewer options to break something…
It makes hacking easier in some ways too. Overlay any hacks. It will be gone by reboot unless you want otherwise. Also see blue-build.org <- It helps you to put all your hacks in the immutable image.
> What appeals to me about linux is the hackability and configurability.
Innovation happens on stable foundations, not through rug pulls.
Yes, you have the freedom to make your system unbootable. When Debian first tried to introduce systemd, I replaced PID 1 with runit, wrote my own init scripts & service definitions, and it ran like this quite well, until... the next stable release smashed me in the face.
It's absurd how hackable the Linux distros are. It's also absurd to do this to your workhorse setup.
I don’t mean this as a gotcha, but have you tried an immutable/atomic Linux distro?
Immutable/Atomic Linux doesn’t take away any ability to hack and configure it. It’s just a different approach to package and update management.
There really isn’t anything you can do on other Linux distros that you can’t do with it.
I’m using Bazzite which is basically in the Fedora Atomic family and all it really changes is that if I want to rpm install something and there’s no flatpak or AppImage then I just need to decide on my preferred alternate method to install it.
I find Bazzite’s documentation on the subject quite helpful: https://docs.bazzite.gg/Installing_and_Managing_Software/
At the very worst case I’m using rpm-ostree and installing the software “traditionally” and layering it in with the base OS image.
Now you might be thinking, what’s the benefit of going through all this? Well, I get extremely fast and reliable system updates that can be rolled back, and my system’s personalization and application environment is highly contained to my home directory.
I’m not an expert but I have to think that there are security benefits to being forced into application sandboxing as well. Applications can’t just arbitrarily access data from each other. This isn’t explicitly a feature of immutable/atomic Linux but being forced into installation methods that are not rpm is.
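For the curious, the rpm-ostree layering/rollback flow mentioned above is short (the package name is illustrative):

```
rpm-ostree install htop   # layers the package into a new deployment
systemctl reboot          # boot into the new deployment
rpm-ostree status         # shows current and previous deployments
rpm-ostree rollback       # revert to the previous deployment if needed
```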
The immutable aspect and the way the OS is designed sounds like a good fit for that.
But I wouldn't use KDE for the typical clichéd (grand)parents: it's just way too complicated for someone who doesn't have high proficiency in tech.
I like hacking Linux too.
But some people just want a computer to work.
It's not like you can't try a simple distro and move on to something more complex later.
>That might be their target audience.
Seems like a lot of effort and fanfare for such a niche market.
That "niche" market of ageing parents with legacy hardware is much bigger than the nerd hacker market of Arch linux.
The difference is that there are a lot more HN-like users who will go out and run Arch than ageing people who will go out and install Linux instead of getting an iPad/tablet.
If a distribution is immutable (and thus omits the package manager) and pre-configured for a specific purpose (here, ensuring that KDE works), how much does the base really matter?
It sounds like how ChromeOS is Gentoo based but does not ship the package manager.
You're telling me Google uses Gentoo for ChromeOS but doesn't even host a Gentoo mirror? Jeez...
If that's true, I think it's genuinely disrespectful. Truly.
Won't someone please think of the multi trillion dollar company?
It does, I believe? I've never tried it myself, but I've heard multiple voices say that once you go into the terminal, the entire Gentoo stack is just there, with portage, equery, qapps and such.
In fact, from what I understand it is not really Gentoo-based but Portage-based: they mostly write their own ebuilds and software, and from what I know they have their own custom init system and display system that aren't in Gentoo, but they found that Portage was simply very convenient for automating their entire process. The claim that “Gentoo is just Portage” is not entirely true; there's still a supported base system configured as offered by Gentoo, but it's far more flexible than that of most systems, of course, granting the user choice over all sorts of fundamental system components.
> So it's basically a SteamOS sibling, just without Steam?
Excellent summary. Yes.
Hopefully they also integrate SteamOS/Proton and easy Wine configs and they might have a winner.
Bazzite is a more general-purpose example like that.
But without the Steam side.
The Steam Deck also ships with pacman.
> [everything is] installed using Flatpak.
How's Flatpak doing in terms of health of the tech and the project maintenance?
Merely 4 months ago things didn't look too bright... [1]
> work on the Flatpak project itself had stagnated, and that there were too few developers able to review and merge code beyond basic maintenance.
> "you will notice that it's not being actively developed anymore". There are people who maintain the code base and fix security issues, for example, but "bigger changes are not really happening anymore".
[1]: https://news.ycombinator.com/item?id=44068400
Flatpak and Snap always seem to be in the "just give us 6 months and we'll have everything fixed" phase. It's been the same for 7 or 8 years at this point.
On a desktop, I nowadays actually somewhat prioritize flatpaks. I can get recent versions, sandboxing and the configs and data are always in standard locations with predictable naming. They can be installed for user in home dir without root and are easy to move over in case of OS reinstalls.
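The per-user flow is a couple of commands (a sketch; the Flathub remote and app ID are the usual defaults, adjust to taste):

```
# Everything lands under ~/.local/share/flatpak, no root needed
flatpak remote-add --user --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install --user flathub org.videolan.VLC
flatpak run org.videolan.VLC
```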
Flatpak works pretty well. I try to prioritize my distribution's repositories but some software is not packaged. I've taken the easy way out and installed the flatpak. I guess I could go and package them, but I've been too lazy so far.
I think the fact that you try to prioritise the distro's repos shows that it probably isn't quite ready. Presumably that's because you know they'll work reliably, but you aren't so sure about Flatpaks.
I can't speak for GP but the number one reason I prefer my distro's repos over flatpaks has nothing to do with Flatpak as a technology.
Most distros have a fantastic track record of defending the interests of their users. Meanwhile, individual app developers in aggregate have a pretty bad one; frequently screwing over their users for marginal gain/convenience. I don't want to spend a bunch of time and energy investigating the moral character of every developer of every piece of software I want to run, but I trust that my distro will have done an OK job of patching out blatantly user-hostile anti-features.
For Flatpak, I use vscodium to strip Microsoft telemetry out of vscode.
It works really well, the one downside is that vscode extensions are pretty intrusive. They expect system provided SDKs. So you then have to install the SDKs in the Flatpak container so you have them. If vscode extensions were reasonable and somewhat sandboxed that wouldn't be a concern.
All that is to say, Flatpak works well for this purpose too.
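Concretely, the SDK dance described above looks something like this (a sketch; it assumes the Node 18 SDK extension on Flathub, and names drift between releases):

```
flatpak install flathub com.vscodium.codium
flatpak install flathub org.freedesktop.Sdk.Extension.node18

# Enable the SDK extension inside the app's sandbox
flatpak override --user --env=FLATPAK_ENABLE_SDK_EXT=node18 com.vscodium.codium
flatpak run com.vscodium.codium
```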
I recently installed Debian 13 and went with the default partition sizes for /, /var, swap, etc. I had two flatpaks installed and my entire /var partition was filled up with 10 GB of flatpak data. Frankly, very bad default partition sizes, and I should not have been so trusting, but flatpak is an unbearably hot mess.
Flatpak installs and shares runtimes. That's what makes it so stable, regardless of your distro.
So yes, if you install 1 KDE app from Flatpak, you will have the KDE runtime. But that is true if you install 1 KDE app while on Busybox as well. It's the subsequent KDE apps that will reuse the dependencies.
If those apps are built against the same runtime version
Which is often not the case. For those of us with slow internet connections, flatpaks take hours to download programs that would otherwise take seconds.
And for those of who administer lots of systems, it means I have to track all of the bugs in multiple runtimes.
How many versions of openssl are on my Silverblue laptop? I honestly couldn't tell you.
That's the entire draw of Flatpak - I can have applications with out of sync libraries and they just work. That's a big big headache with system provided packages.
I don't think Debian creates a separate /var by default, only /, /boot, swap, and uefi.
It defaults to one / for it all, but if you tell it not to it will suggest partition sizes for you. Regardless this is definitely self-inflicted.
Absolutely. I should have verified partition sizing, and I should never have allowed even one flatpak. That doesn't make Debian default sizes and installation process anywhere close to good.
Why, of all root directories, would you skimp out on /var? It literally stands for variable data.
Because it isn't used for much? It's mostly just logs these days. Most data on most systems goes in /usr or /home. I would say the weird thing here is that Flatpak puts runtimes in /var by default instead of ~/.cache or something like that.
User-mode Flatpaks keep things in ~/.local/share/flatpak. This person simply installed a Flatpak in system-mode, which puts it somewhere other users could also run it (i.e. not your home directory).
Libvirt virtual machines are also stored there.
Where do you think docker containers are installed?
Ask the Debian maintainers. That was their recommendation, and I trusted them - presuming they would recommend something that would work more than two weeks on a rather standard laptop installation. I will have to re-partition within the next year, because their / partition is too small as well.
But the default is to just use /, no? So you did not trust them.
I think this happens because the default option is “recommended for new users”. So some not-new users believe that the other options are better for them.
That default options reads like this: “All files in one partition (recommended for new users)”
No, they make more than one recommendation - including which partitions to make and the sizes for each of them should you opt into their separate partition path in the installer. So they have defaults for multiple partitions and partition sizes - and I trusted them to have thought them through.
Two improvements that could be made: 1) Easy: put a brief note in the installer indicating what might fill up the partitions quickly, so people get a heads-up, do a little research, and make a better decision. 2) Moderate: still keep the note, but also check the disk size and maybe ask which type of workload (server, development, home user), then propose something a bit more tailored.
Why not just use the default, instead of separate partitions for everything? This is not a 30-year-old BSD.
For better control over permissions:
```
# <source> <mountpoint> <fstype> <options> <dump> <pass>
/     /     ext4 defaults                           1 1
/home /home ext4 defaults,nosuid,noexec,nodev       1 2
/tmp  /tmp  ext4 defaults,bind,nosuid,noexec,nodev  1 2
/var  /var  ext4 defaults,bind,nosuid               1 2
/boot /boot ext4 defaults,nosuid,noexec,nodev       1 2
```
>I should never have allowed even one flatpak
I don't think that's the best conclusion: these days, disk is cheaper than it has ever been, and that "foundational" 8 GB will serve all the Flatpaks you want. Installing apps from packages sprays the same dependency shit all over your system; Flatpak was nice enough to contain it, so you immediately noticed it.
Flatpak is a good idea.
It's at best a mixed bag which makes it harder to fine-tune apps on your system and with limited security benefits (which again become harder to improve yourself).
https://madaidans-insecurities.github.io/linux.html#flatpak
Mind sharing which two apps you went with?
When installing just two apps, even if both are in the same (KDE or GNOME) realm, you can very easily end up with 8 flatpaks (including runtimes) or more. This is due to the variety of runtimes and their versions: one per KDE or GNOME Platform release (about two a year), plus a yearly freedesktop base, and not all apps being updated to the latest runtimes constantly.
You then have to add at least 6 locale flatpaks to these hypothetical 8 flatpaks.
Especially with Debian, locales matter: if you don't do a `sudo dpkg-reconfigure locales` and pick what you need before installing flatpaks on a default install, you will get everything, and thus gigabytes of translations you don't even understand, wasting your disk space.
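A sketch of keeping that in check (language codes illustrative):

```
sudo dpkg-reconfigure locales           # trim system locales first
flatpak config --set languages "en;de"  # only fetch these translations
flatpak uninstall --unused              # drop locale/runtime leftovers
```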
I recommend always using LVM so you can grow/shrink filesystems easily.
I don't know why people are obsessing over the partition scheme instead of over two apps using as much space as a Windows 10 installation.
My full / for a desktop Debian with a ton of stuff is under 4 GB.
That was what was insane to me. I expected a couple hundred MB each for my first couple of apps. Not a pleasure in itself, but I was blindsided by the 10 GB. The apps were clearly also part of the problem; they should not have so many dependencies. However, even after I removed them, flatpak was using 8 GB+, and I had to purge it to reclaim the space. That is why I called it a hot mess.
Did you install the flatpaks for a user or system-wide?
Yeah my one experience with installing things through flatpak is that it breaks them when it updates itself, upon which they can't launch until they're updated as well. And then for some reason errors out when trying to update them. Sigh.
Yeah leave this thing to die in peace.
Personally I'm interested in distros with an immutable base system. After decades of a lot of tinkering with all sorts of distros, I value a stable core more than anything else. If I want to tinker and/or install/compile packages I can do so in my $HOME folder.
In fact, this is what I've been doing in other distros, like Debian stable, nevertheless I have no real control of the few updates to the base system with side effects.
This is not the first immutable distro, but it comes from the people who develop my favourite desktop environment, so I'm tempted to give it a try. Especially as it looks more approachable than something like NixOS.
Aren't "immutable" distributions really just glorified "live CD's"? Not really seeing the point of them, tbh. It means that users will have to build a custom system image or fiddle with FS overlays just to do system management tasks that are straightforward on all other systems. The single interesting promise of "seamless" A/B updates is a vacuous one, that you could address more effectively on traditional systems by saving a temporary FS snapshot; this would even deduplicate shared data among the two installs, which is very hard to do with these self-contained images.
It has always been rather insane to me that user facing applications share packages with the base system.
The atomic distro approach works a lot better for me. Would not go back to a "normal" distro from https://getaurora.dev.
Lots of Linux users hate it, but as a one-time Linux user (about a decade as my main desktop OS) who now does 100% of important computer use on macOS or iOS, I find the division of “stable macOS base all the way through a working and feature-complete GUI desktop; homebrew (and the App Store) for user software; docker and language-specific package/env managers for development dependencies” to be basically perfect. Trying to use linuces where the base system and user packages are all managed together feels insane, now.
It depends a lot on the distro and how volatile it is and what tools are available.
I run Debian stable, and it's not immutable, but it is very unchanging. I don't worry much about system libraries and tooling.
The downside to that is that userland applications are then out of date; enter Flatpak. I run most GUI applications in Flatpak. This has a lot of benefits. They're containerized, so they maintain their own libraries. They can be bleeding edge, but I don't have to worry about it affecting system packages. I also get much simpler control: no fiddling with AppArmor, the built-in Flatpak permission system is powerful enough.
The blind spot then is CLI apps and tooling. Usually it doesn't matter too much being bound to system packages, but if it really does, I can always containerize those too. I only do it for my PHP dev environment.
>The blind spot then is CLI apps and tooling. Usually it doesn't matter too much being bound to system packages, but if it really does, I can always containerize those too. I only do it for my PHP dev environment.
Do you encounter any friction getting different containerised tools to talk to each other? Can you compose them in the classical Unix fashion?
I put them in the same container, basically. I bundle my PHP binaries, modules, and tooling all together and then use them from one container. It defeats the purpose a bit, but it keeps the system clean. No npm packages cluttering everything, no Composer packages leaking, etc. Cross-container is, I'm sure, more complex.
> I run Debian stable, and it's not immutable, but it is very unchanging. I don't worry much about system libraries and tooling.
I basically did the same with Tumbleweed for a couple of years. Can't stand the point release distros. Lagging behind a year or two when it comes to the DE is not something I fancy. Never liked Tumbleweed much though. Felt very unpolished when using Plasma.
> The blind spot then is CLI apps and tooling.
I can really recommend homebrew. Works well. apt is for the system, homebrew is for user facing CLI apps. :)
I did the same. I have a Debian stable install. Everything else besides build-essential, Firefox and GNOME is either Docker, Homebrew or Flatpak.
This. I always dreamed of a Debian/Ubuntu distro where you could fully separate the system env from the userland env, with userland referring to the system env; if the userland env has a different version, userland takes precedence when it's the user running the command, versus some automated service. I don't know if there's something like this for Linux outside of maybe containers? I guess that comes sorta close.
I think GoboLinux might scratch that itch.
Apps are bundled and installed like they are on macOS, and there's a very strict distinction between literal 'System', 'Users' and 'Programs' directories.
Try it before you criticize it.
> It means that users will have to build a custom system image or fiddle with FS overlays just to do system management tasks that are straightforward on all other systems.
What system management tasks? /etc and /var are usually writeable, which is all you need to configure the software on your system. Overlays are for installing new software on the base system, which is only really necessary for something like nvidia drivers because all other software is installable through other means (it's also usually a trivial process). Even if you don't want to use containers, you can use a separate package manager like Homebrew/Nix/Guix/Pacman/etc.
It requires a bit of a mental shift to adapt to if you're only used to traditional systems. It's kind of like the move from init scripts to systemd: it's objectively an improvement in all the ways that matter, but cultural/emotional push back is inevitable :)
I have been using Aurora DX for the last month, and it has been a good experience but has also required a shift in my thinking.
If anything is not included in the base image, you have a few options:
I have a few things in distrobox containers, things which aren't available as flatpaks. The biggest hurdle, for me, was getting Wireshark running, since the flatpak version can't capture traffic. I had to make a root distrobox container and export the app to my desktop. It works, but there were definitely some hoops to jump through. I like that updates come through once a week and aren't applied until I reboot. If I have problems, it is easy to roll back to what I was running before.
I would be comfortable giving my parents an Aurora setup, knowing that they can't easily break it.
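For anyone hitting the same wall, the rootful-container dance looks roughly like this (the distro image and package names are illustrative):

```
distrobox create --root --name nettools --image fedora:40
distrobox enter --root nettools -- sudo dnf install -y wireshark
# Run inside the box: adds a desktop entry on the host
distrobox enter --root nettools -- distrobox-export --app wireshark
```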
I use Bazzite, which ships with the homebrew package manager. Idk if wireshark is available on homebrew, but if it is then you'll be able to use it that way without having to deal with any issues related to containers. Nix is probably another option (you can use Nix as a package manager instead of a distro)
You could also build it from source, although that's definitely more work.
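Checking is quick, for what it's worth (Homebrew on Linux; formula availability changes over time):

```
brew search wireshark    # see whether a formula exists
brew info wireshark      # inspect it before installing
brew install wireshark   # note: the formula leans CLI (tshark); the GUI may need more
```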
Immutable systems such as this one and Fedora's Atomics and CoreOS/Flatcar have their uses. Whether they make sense for you or for general desktop OSes is another question, but there are many situations where the approach makes a lot of sense.
Really, I don't see a lot of difference between immutable desktop OSes and Android or iOS. That model is not necessarily a bad one when you're rolling out systems that you don't expect the user to need to fiddle with the system management tasks you refer to. If I have 1,000 laptops to manage for a corporate environment, say, or for non-technical users who are not going to fiddle with drivers but might want to install Inkscape (or not).
NO!!!! They are in practice more about keeping the core OS very small and stable and putting all packages outside.
Or you can try to install whatever custom packages you need under $HOME, without the need for any special permissions or FS overlays? But yes, saving snapshots is also a good solution.
I guess immutable distros such as this one target people who don't need much customisation and mostly just need what's already there anyway.
The advantage of immutable distro over custom OS snapshot is that everyone is booting off the same images. It makes support manageable because behaviors are much more likely to be reproducible. This is what stability is about, not just local system image.
I understand that disk snapshots, with ZFS for example, can cover most of what's needed in recovery scenarios.
But immutable OSes are helping some sandboxing tools make progress and allowing new workflows to manage the OS (virtualized or not).
>just to do system management tasks
End users should not have to do system management at that kind of low level. They should be able to focus on accomplishing what they actually want to do and not have to maintain the system themselves.
>you could address more effectively on traditional systems by saving a temporary FS snapshot
That's an implementation detail. Every modern OS essentially uses snapshots for A/B updates to avoid wasting storage space.
There's Arkane Linux, which aims to be atomic as well, and the maintainer snapshots the packages every few days after testing. It's currently mainly managing/focusing on one DE, but I could see it including KDE etc. in the future if enough volunteers join in. I haven't given it a shot yet; I quite love EndeavourOS as is.
I didn't know this one, thanks. Looks interesting as well.
I switched to Fedora Kinoite about two years ago and it's been a great experience. Updates are mostly invisible to me, I only layer a handful of packages (zsh, fzf, distrobox) and I do development inside of distrobox containers so I don't have weird build dependencies in my base system.
Desktop apps are all Flatpaks, including Steam.
Edit: This comment has been downvoted into the negatives? Did something change about HN culture?
I switched to https://getaurora.dev, also two years ago, and I'm not going back to a "normal" distro.
Can recommend Bazzite, Bluefin and Aurora which are derived from Atomic Fedora but come with niceties like distrobox and NVIDIA drivers (if you need them).
I tried Bazzite and it was absurdly, perceivably slower than Garuda on the same hardware. Either the immutable distro thing has way too much unavoidable overhead, or their Nvidia image is not tuned for desktop use.
TIL about distrobox. It seems like a really neat way to use containers with good host distro integration.
If I were to guess, maybe people dislike Flatpak in general? At least that seems to be the case on Reddit’s /r/linux.
Been using Bazzite and Project Bluefin on a refurb Dell 7200 2-in-1 I recently picked up; they both work great and I'm really enjoying the experience. They are both part of Universal Blue.
So you more or less want a BSD system. Have you tried them? They are a joy to use, can have far better performance than Linux, and have nice and predictable upgrade schedules. The base system is small and very usable out of the box. And documentation tends to be excellent.
In other words, with your requirements what are you still doing on Linux?
I've been curious about BSD in the past. The thing that stops me is that I like to play with software that requires containers (Docker), and I'm not sure I'd ever get used to the differences from the core GNU CLI utils.
The other thing that worries me is that I've had a lot of trouble building software that mainly supports BSD from source on Linux machines. I'm worried that if I switch to BSD, a lot of the software I want won't be available in the package manager, which would be fine, but also that building from source will be a pain and binary releases for Linux will not be compatible. Sounds like a lot of pain to me.
I'd be happy to be corrected if these are non-issues though.
You can install GNU coreutils out of the package repo no problem. Software packages are mostly available except closed source stuff that is Linux-only at which point you would use the Linux compat layer and it mostly works.
Docker is Linux-only for now but there is movement in that area. BSD had jails first and there is work on making generic containers work across multiple operating systems, including BSD. But I think the point of using BSD is to not bring Linux with you. Try their way and see how you like it.
A long time ago I used all the BSDs and loved them. Eventually the performance of Linux hooked me back, but I guess it's always a good time to go BSD again. I miss the predictability of the upgrades.
What are you talking about? Are you emphasising "bad" for effect, or is this an acronym? Please give examples.
Googling "bad operating system" returns useless results.
Autocorrect strikes again. I fixed it and it sent it right back. I swear it works better on coffee.
He probably typed BSD, and it got autocorrected to BAD.
Gentoo really missed an opportunity there.
They could have been the “Build Always Distribution” (BAD)
I think they meant BSD, but took me a bit.
A few years ago I switched to KDE and the experience has been absolutely seamless and good; while the upgrade to Plasma 6 took some time to propagate down to distros, it was well worth the wait!
It seems that a project like KDE might be in a very good position to make a very competitive distro, simply because they are starting from the point of the user experience, the UI if you will. Think M$ Windows: it IS the GUI, fully focused on how the user would use it (I'm thinking of the days of XP and Win 7).
A KDE distro might be less encumbered by "X11 vs Wayland" or "flatpak vs <insert package manager name here>" discussions and can fully focus on the user experience that the KDE/Plasma desktop brings!
I'm looking forward to taking this for a spin!
That's exactly what's compelling to me as well, as an absolute fan of KDE and all its features, as well as its stability. Who better to seamlessly integrate everything around a KDE desktop than KDE themselves? KDE neon had potential as well, but I really like the notion of an immutable base system and fewer surprises during an upgrade.
This bit is a no-go for me. They've decided what goes in the immutable base OS and allowed in a set of KDE apps, citing the subpar experience of the Flatpak versions. I'm guessing they haven't tested all Flatpak apps the way they tested their own.
"Well, we’re kind of cheating a bit here. A couple KDE apps are shipped as Flatpaks, and the rest you download using Discover will be Flatpack’d as well, but we do ship Dolphin, Konsole, Ark, Spectacle, Discover, Info Center, System Settings, and some other System-level apps on the base image, rather than as Flatpaks.
The truth is, Flatpak is currently a pretty poor technology for system-level apps that want deep integration with the base system. We tried Dolphin and Konsole as Flatpaks for a while, but the user experience was just terrible."
https://pointieststick.com/2025/09/06/announcing-the-alpha-r...
Nathan (who is a QA person with user-visible breakage ever-present on his mind) is talking about the alpha and the present-day situation, which naturally isn't set in stone. KDE is a Flatpak contributor. One of the little skunkworks projects within KDE Linux is even exploring further evolution of Flatpak that would allow putting Plasma itself into one, etc. This is an ongoing story, you shouldn't assume dogma.
They are both admitting that Flatpak gives a terrible user experience and making Flatpak the only way for users to install apps.
Strange design.
They admit
> Flatpak is currently a pretty poor technology for system-level apps that want deep integration with the base system.
Therefore they ship those apps on the base image, rather than as Flatpaks. I don’t see what’s wrong with this approach.
KDE Ark is a graphical file compression/decompression utility. It's not a system app and does not require deep integration with the base system. It's a bit of a strange choice of app to include in the system image.
Which is odd. Windows has been able to browse ZIPs like normal folders since... 98? XP? Can't remember now.
IMHO KDE delegates too much core functionality to apps. On macOS, I can press "space" while having a file selected and I get an instant preview. This sort of thing must not be delegated.
Is this true? I was under the impression Windows wasn't able to decompress ZIP files natively till very recently, like Windows 11. I could be remembering wrong.
Yeah it's been supported since at least Windows 7. I think XP sounds about right.
At the very least it does add context menu entries for compression to files, apart from "open with", obviously. That might already be the reason right there.
So I can't install an app that adds context menu entries? I can do that on Windows.
That likely depends on the desktop environment. I have packages installed on my steam deck that add context menu entries, so clearly it's not impossible (my system still remains read-only, though I've been thinking about using an overlay like rwfus to get some new native packages, due to annoyance of self-management of self-built and downloaded ~/.local stuff)
Yeah, obviously. Windows lets everybody and their dog write into the registry.
Which goes completely against the kind of immutable and sandboxed system that KDE Linux intends to be.
"Skate to where the puck is going"
They are betting that Flatpak is the future, even if the present experience is subpar.
Problem is, it has been subpar for some time already..
Sure but that only works if the puck is actually moving, which apparently it isn't. https://lwn.net/Articles/1020571/
Note: not Dolphin the GameCube+Wii emulator but Dolphin the file-browser/manager (a KDE native)
Not Dolphin the Smalltalk, either.
http://www.object-arts.com/dolphin7.html
I would be surprised if anybody who ever used kde would confuse the two.
This definitely looks like a system intended to be configured by an administrator, not the user. It shouts "secure office use", much like Silverblue.
Installation is exceptionally easy: apart from timezone, install disk, and user account, there is nothing to configure.
I'd also expect installing flatpaks offline would be a hassle.
> KDE Linux is Wayland-only; there is no X.org session and no plan to add one.
Does this mean they're testing that all the Wayland bugs are fixed? I haven't updated to the new Debian stable quite yet, but all the previous times I've switched to Wayland under promises of "it's working now" I've been burned; hopefully dogfooding helps.
The issue is that you are using Debian stable. Software quickly becomes out of date, sometimes by years, with the exception of security fixes and occasional backports.
Wayland, KDE, and several other pieces of software evolve rapidly. What may be broken in one release will very likely be fixed a few releases after the last Debian stable release.
I'll run Debian on a server if I need predictability and stability with known issues. I won't run Debian on a desktop or workstation for the same reason.
I've tried distros with faster cadences. All that means is that I get an endless stream of new bugs, rather than a few that I can find workarounds for (such as just reverting to the still-good X11).
I worked with a guy who railed against the conservatism of our company's releases. He said "new software has more bug fixes." Then again, he was maybe a kind of hardcore software quality guy -- not the sort to add "features" to a piece of infrastructure that had demonstrated its worth.
The only issue I have with software conservatism, like Debian, is that some new thing requires something newer. If you live in a world where you can do without the new thing, then it's really quite nice. Security patches are another matter, but are usually dealt with.
I like to be on the bleeding edge, but Debian was created for a reason. Only time can determine which configurations don't suck.
> All that means is that I get an endless stream of new bugs, rather than a few
For some obscure reason, bugs are easier to produce than fixes. But the next release will be better. I promise.
This is the way.
I used to "hate" Wayland, but that was because I was stuck on an ancient kwin_wayland implementation that didn't get updated for years on Ubuntu.
When it comes to big changes like Wayland and Pipewire, you really want the latest versions you can get. Like the OP, I only use rolling releases on my machines for that reason.
Even as of Ubuntu 24.04 there's still plenty of stuff that's just broken. Can't stream my screen from Discord, can't share my screen from Firefox. Weird color problems with my calibrated monitor. Switching to Xorg solved all of these issues.
I'm open to moving to Debian testing/unstable if Wayland can actually deliver. What do you run?
My CachyOS / KDE install with pure Wayland has been buttery smooth and recently got an update that finally lets me calibrate the max brightness of my HDR OLED monitor (which was the monitor's fault. Not even Windows could make it work properly for non-games until now). CachyOS is also the first distro I've used in years that does things close enough to the way I like out of the box that I haven't bothered to update my system reinstall script in months.
I've also been giving Bazzite to some non-tech people who have not once asked for help. That one is immutable and Wayland only, so it's a further testament to how far Wayland has come if you're on an up-to-date-enough system.
Sadly, I'm stuck on older Ubuntu for my work laptop because the mandated security software won't run on anything better.
What is the mandated security software? I’m asking to avoid it in my own org.
I've had to patch the .desktop file in Debian for Killbots to make it start via X11, because the Wayland one was unplayable.
> you really want the latest versions you can get
> Even as of Ubuntu 24.04
I get that this is the current LTS release, but clearly this isn't what the parent poster had in mind. Notably, 24.04 never shipped Plasma 6, which carried a lot of critical Wayland fixes.
Yeah, I wouldn't even bother with Wayland on Ubuntu unless it works out of the box.
I'm on an unholy amalgamation of Arch/Cachy/Endeavour now, but I have been using screen sharing nearly every day on calls via Firefox on Arch for about a year, and it's worked without a problem.
I considered Debian testing, and it does work well on servers, but a true rolling release is more convenient. The software release and update loop is shorter, it's really nice to be able to pull fixes to packages in a reasonable amount of time after they're released.
Debian testing (without security updates from unstable) is a no-go for servers as you are getting no timely security updates. https://wiki.debian.org/DebianTesting#Considerations
None of this is a problem on Debian stable. I even run Discord as a Flatpak - screen share works fine. I believe there are systems for that now (PipeWire? the xdg-desktop-portal stuff?)
Ubuntu 24.04 is older than Debian stable currently.
Try Fedora 42 KDE (or its atomic equivalent Kinoite). It works very well.
Regardless, that's still a huge Linux usability issue when the user needs to know for sure the specific source to install a friggin web browser where screen sharing works.
Indeed, though not so much Linux but rather a Ubuntu-specific issue. Most (all?) other distributions don't distribute Firefox as a Snap, so screen sharing will work out of the box.
Oh I absolutely agree. I've seen way too many people fall into this trap, install Firefox from Snap, Zoom client from Snap, what could possibly go wrong? Turns out, quite a lot!
I’m on Arch and I generally struggle to get video acceleration in a browser with an Nvidia GPU.
Wayland + KDE on Arch has worked seamlessly with NVIDIA GPUs for a couple of years now.
Is that what you’ve heard, or has it been your experience?
I’m sitting with SW acceleration in the browser today because some update broke it. I have had it working in the past but I’ve had like 2-3 updates in the past 2 years break it.
And for what it’s worth there was a really bad tearing bug because of the argument over implicit and explicit synchronization that neither side wanted to fix. I think it’s since been addressed but that was only like in the past 6 months or something. So it’s definitely not been “years” since it’s been working seamlessly. Things break at a greater rate than X because X basically is frozen and isn’t getting updates.
That has been my experience. The browser seems fine for me.
With Arch, you have to read up ahead of time before updating software because it's a rolling release.
I remember one breaking change when I was switching from the previous Nvidia drivers to the new 'open' ones, but some breakage was expected with that change.
Yeah, except when it bugs out. Mentioned some things to try to in another comment. I'd be surprised if it was just me seeing these issues...
With an older GPU, things aren't smooth I believe.
So it might make sense to avoid Wayland in that case.
I’m on a 2080.
That's the first supported generation for the official open source drivers.
Make sure you have switched over instead of using the old proprietary one.
Yup, same, using X, everything mostly works. Wayland - not so much.
And yet I tried setting up Manjaro to see what all the fuss was about with Arch-based systems. In less than ten minutes I understood the origin of all the krashes memes.
I've been running Debian stable (with backports) as my desktop for a couple of years now. I find that KDE is updated enough, and Wayland is stable enough (on my hardware, of course: a 13-year-old MacBook and an 8-year-old NUC). Honestly, as a simple user, I haven't appreciated any difference between X and Wayland sessions, so I just log into Wayland.
Stability is just as valuable for a workstation as it is for a server.
> What may be broken in one release will very likely be fixed a few releases after the last debian stable release
This joke has been told for 20 years. If it was fixed in KDE 5, why do they need KDE 6?
It wasn't fixed in Plasma 5, and it is in Plasma 6.
Software doesn't magically become "stale" by itself. It's deliberately broken by PEOPLE who DECIDE TO BREAK IT.
> evolve rapidly
... um, okay, that's true, although in the last 10+ years it did not "rapidly" reach stability
While I appreciate all the folks singing our praises, as an upstream developer I think you deserve a better response than "you are holding it wrong" :)
We think that the Wayland session currently is the better choice for the majority of our users. It's more stable and polished, performs better on average, and has more features and enables more hardware.
That said, there are still gaps in the experience, and non-latin input is one of them. In principle it's roughly on par with X11, but it's still pretty crap; e.g. at the moment we give you a choice between having a virtual keyboard and having something like ibus active, when many users want both at the same time, and a lot of the non-latin setup stuff is still not part of the core product, which is very user-unfriendly.
The KDE Linux alpha in particular will definitely and unequivocally not serve you well, as it currently doesn't ship any ibus/fcitx.
The good news is that this is something we are very actively working on in the background right now. We have an annual community-wide goal election process that made Input improvements one of our goals, and the topic has been all over our developer conference this year.
Can I do something like `wmctrl -xa terminator || exec terminator` yet?
Another huge gap is accessibility. No Wayland compositor has managed to implement screen reader support that works with existing applications yet. And no, GNOME's Wayland compositor did not achieve this. In typical GNOME fashion they threw away all support for existing screen readers and accessibility and invented two entirely new GNOME-only protocols that no software except theirs supports.
I'm in a similar boat - I tried the Wayland session in Debian 10 and 11 and lasted less than a day; in Debian 12 I toughed it out for about a week before hitting a showstopper; but this time in Debian 13 I've used it since release without a single nit to pick.
Has any distro ever promised that there are zero bugs in the software they use? I don’t particularly like Wayland but a lot of people have been using it for years at this point…
User adoption is not really a great metric when it ships as a default on common distros. Most people would rather deal with issues and wait for support than fix things in an unsupported way.
If it wasn't a default, it'd go back to barely being used.
If it's really broken, they can't get away with setting it default.
Microsoft/Google would like to have a word with you.
Yep, lol. My experience on Windows 11 is that when opening a laptop, there's a realistic chance the taskbar will hang and have to restart itself (which takes a surprisingly long time).
This is my reality too and the taskbar takes some stuff down with it.
Also the taskbar is just broken in general. It'll pull tons of apps behind the '...' button even though there's plenty of room on the taskbar and it'll also put fake apps that aren't actually open on the taskbar.
Also no vertical task bar. Come on Microsoft.
Or the last 27 years of audio on Linux.
I agree that it isn’t a great metric for, like, how good the desktop environments are in some overall sense. I’m just saying it has enough users that it isn’t some niche thing where a ton of bugs can easily hide.
I think "most" are fixed. I use quotes because I've seen people say they have issues that I have never run into myself.
I'm currently stuck on Windows for some old school .NET work, but otherwise have been running Wayland on either arch or fedora for 8 or so years, no real problems specific to Wayland. With that said, I've also always had X to fall back to for the odd program that absolutely only worked in an X session. At this point, though, I don't even recall what they were (probably something that didn't like running under Swaywm because wlroots), so even that might not be an issue.
When was the last time you tried Wayland? I switched to KDE Plasma a couple years ago not knowing anything about display server protocols and haven't had a single issue.
The last time I tried it extensively was on Debian Bookworm (12.1 and later; I always wait for the first point release), released July 2023 but frozen sometime around February 2023.
Yes, this was a while ago now. But just as now, people said then "all the bugs are fixed and missing features added"; all that really means is "we're in the long tail". I might've put up with it if not for the fact that there were 2ish major bugs that directly affected my main workflow (e.g. temporarily swapping to non-Latin text input).
Same experience. I switched back to Linux a few months ago after a few years hiatus. Installed Arch and KDE Plasma. Literally didn’t even know I was using Wayland until I had to fiddle with something and realized X wasn’t even installed
I've been using Wayland exclusively for 4 years now on Arch Linux (maybe more, I forget). At this point, it is better than X11. It still has bugs, but then so does X11.
Fractional scaling is fixed in Plasma 6 though. So, if you need that, it has been good for 1 year now.
I don't want to call Linux old-fashioned, but to still be working the kinks out of a windowing system in 2025 boggles me... it's almost as if there's a resistance to GUIs or something.
The window manager in windows is horribly buggy and extremely slow. Lots of animation flickering and snapping for seemingly no reason. Try maximizing Firefox while a video is playing and watch the animation - or, usually, lack thereof.
Wayland is, by far, the best windowing system I've ever used. No dropped frames, ever. It's actually kind of uncanny; it feels like you're using an iPhone.
GUIs are tough in open source because they need way more than just code. You need UX designers, testers, feedback loops, and infrastructure — stuff most volunteer projects can’t sustain. That’s why devs can dogfood CLI tools but GUIs are a whole different beast.
Even *Windows* and *macOS* struggle with this — just look at how messy *fractional scaling* is [link](https://devblogs.microsoft.com/oldnewthing/20221025-00/?p=10...).
And yet, *Linux/KDE* has been pushing GUI innovation for decades. Apple and Microsoft have copied so many KDE features it’s hard to keep track.
Wayland works great for me. I use a rolling update distribution so everything is the latest version and I only use Firefox, a terminal, and emacs. Debian tends to be pretty far behind.
Are all X11 bugs fixed?
I haven't hit any for probably a decade now.
Bugs in the window manager or shell (both shipped by KDE) are somewhat more common, but even if they are crashes, due to X11 being better-designed for isolated faults they are easily recovered-from without loss of session.
X11 not supporting modern display technologies is arguably a bug, and it's not likely to get resolved at this point (e.g. it can't do mixed DPIs, or VRR with multiple displays, or HDR in general).
I don't care about any of those things, since computers are about productivity for me.
But I'm pretty sure at least half of them actually do work under X11, it's just that some UI libraries refuse to use it on the grounds of "X11 is outdated, I won't support features even though it does".
(also, having played around with DPI stuff on Wayland, it's pretty broken there in practice)
Well and others do care, and no, bunch of stuff straight up doesn't work on Xorg, or is jank fest.
Yep. I feel the same about all the various Wayland compositors. Even 15 years on, none of them has managed to implement accessibility support for existing Linux applications. There is no screen reader support on any but GNOME's compositor, and that doesn't work with existing applications; GNOME invented 2 new incompatible protocols that only their own compositor supports.
No HDR or high DPI is an annoyance. Not supporting accessibility is a real deal breaker. Especially for commercial settings where things like Americans with Disabilities Act compliance matter. And even more for me, with my retinas slowly tearing apart and losing my eyesight: the entire Wayland ecosystem is extremely inconsistent and buggy.
>I don't care about any of those things, since computers are about productivity for me.
I guarantee you spend more time "configuring" linux than actually being "productive" with it.
> I don't care about any of those things, since computers are about productivity for me.
All of those are productivity things
I guess we’d have to see what the argument is. But, that looks more like a lack of features to me.
I suspect X11 can do mixed DPI and VRR with multiple displays if you do 1 display per xscreen, but nobody uses that configuration.
I suspect HDR support could be added if someone were to retrofit it like how VR support was added, but no one really wants to work on that.
I "suspect" all of the bugs that the parent comment complained about could be fixed too, but that wasn't the question.
No. HDR will never come to X11. This is because the X protocol defines a pixel as a CARD32, that is an unsigned 32-bit integer. So the highest depth you could theoretically go is R10G10B10, and forget about floating-point HDR. Fixing this would require rewriting the entire protocol. Which has effectively already been done; it's called Wayland.
Perhaps people ought to listen to the Xorg devs when they say X11 is broken and obsolete, and you should be using Wayland instead. Every single one of them says this.
All sorts of things in X11 are "defined" as a particular thing in the base standard, then changed in protocol extensions. You really shouldn't be writing raw pixels anyway (and most people don't, since that breaks if your monitor is using 8-bit or 16-bit, for example).
> You really shouldn't be writing raw pixels anyway (and most people don't, since that breaks if your monitor is using 8-bit or 16-bit, for example).
what are you talking about?
X11 supports all sorts of obsolete pixel formats, including 1bpp mono, 4bpp and 8bpp indexed color, and 16bpp "high color" modes. In order to display an image in X11, you need to understand the pixel size, organization, and how colors are mapped to pixel values (all of which are available in a data structure called a visual).
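If you want to see what that looks like in practice, `xdpyinfo` dumps every visual the running server advertises. A rough sketch (the exact output shape varies by server and driver):

    # List the visuals the X server advertises, with the depth,
    # class (TrueColor etc.) and RGB bit masks of each one.
    xdpyinfo | grep -A 7 'visual:'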
Amazing standard you got there.
To be fair, the same is very true with Wayland - you can't do much without extensions.
> So the highest depth you could theoretically go is R10G10B10, and forget about floating-point HDR.
R10G10B10 matches most HDR displays out there, AFAIK even Windows uses that outside of DirectX (where FP16 is used).
But beyond that...
> Fixing this would require rewriting the entire protocol.
...most of the protocol does not have to do with color values. X11 is extensible, and an extension can be used that allows alternative functions with more advanced color values where that'd make sense. For example, assuming you want to use "full" range color values for drawing functions like XFillPolygon, you'd want to add extended-range state to graphics contexts and introduce extended commands for changing it (with the existing commands simulating an SDR color for backwards compatibility). That is assuming R10G10B10 is not enough, of course (though because for decades many applications assumed 8-bit RGB, it is a good idea to simulate sRGB/SDR for existing APIs and clients regardless of the real underlying mode of the monitor, unless a client either opts in to extended color or uses the new APIs).
Another thing to keep in mind is that these are really only needed if you want to use the drawing primitives with extended color / HDR. However, most HDR output, at least currently, is either done using some other API (e.g. Vulkan) or via raw pixel data. In that case you need to configure the window output (a window region, to allow for apps with mixed color spaces in a single window - e.g. think Firefox showing an SDR page with an HDR image) to use a specific color space/format and then rely on other APIs for the actual pixel data.
This is something I've wanted to look into for a while now; unfortunately other stuff always ends up having more priority - and well, my "HDR" monitor is only HDR in name, it barely looks any different when I try to enable HDR mode in KDE Plasma under Wayland for example :-P. I do plan on getting an HDR OLED monitor at some point though, and since I do not plan on changing my X11-based environment, I might take a look at it in the future.
Again. This is a thing the xorg devs have already looked at. Their conclusion? "Nope. Too much work. Just use Wayland."
Once again, every... last... one of the Xorg devs is of the opinion that you should be using Wayland instead. Even if you had changes to propose to Xorg, they will not make it into a release. If you insist on soldiering on with X, your best bet is probably to contribute to Wayback, likely to be the only supported X11 display server in the near future, and see if you can add a protocol to the compositor to allow "overlay" of an HDR image displayed using Wayland "on top of" an X window that wants to do HDR.
But really, consider switching to Wayland.
I wish it were that easy to switch to Wayland, but I have always run into serious issues. Granted, it has been a year since I last tried, so who knows.
I use X11 features such as highlight to copy and then using middle mouse button and/or Shift-Insert to paste its contents (just to mention one), and I use xclip extensively to copy contents of files (and stdin) to it. I use scrot, I use many other applications specifically made for Xorg, and so forth. I have a custom Xorg config as well which may or may not work with Wayland.
Thus, I do not think I could realistically switch to Wayland.
> and I use xclip extensively to copy contents of files (and stdin) to it.
I won't say anything against your other points (and in fact I am typing this comment on Xorg because I have my own list of reasons), but https://github.com/bugaevc/wl-clipboard is almost drop-in for xclip/xsel.
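In case it helps anyone migrating, the day-to-day mapping looks roughly like this (flags per the respective man pages; adjust to taste):

    # X11: copy a file to the clipboard, then paste it back out
    xclip -selection clipboard < notes.txt
    xclip -selection clipboard -o

    # Wayland near-equivalents from wl-clipboard
    wl-copy < notes.txt
    wl-paste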
> Again. This is a thing the xorg devs have already looked at. Their conclusion? "Nope. Too much work. Just use Wayland."
My comment isn't about how much work something would need, but about how it can be done.
> Once again, every... last... one of the Xorg devs is of the opinion that you should be using Wayland instead.
Good for them, but i have my own opinions.
> Even if you had changes to propose to Xorg, they will not make it into a release.
Maybe or maybe not. AFAICT the official stance has been that nobody wanted to work on these things, not that they are against it, they just do not want to do it themselves.
But if they do not make it into a release, there is also the XLibre fork or there might be other forks in the future, it isn't like Xorg is some sort of proprietary product. I'd rather stick with Xorg as it seems more professional but ultimately whatever works.
> see if you can add a protocol to the compositor to allow "overlay" of an HDR image displayed using Wayland "on top of" an X window that wants to do HDR.
TBH this sounds like an incredibly ugly and fragile hack. There are two main uses for HDR support: embedded HDR (e.g. in a firefox window) and fullscreen HDR (e.g. for videos/games). For the latter there is no point in an overlay, just give the server the full screen. For the former such an overlay will require awful workarounds when you want more than just a self-contained rectangle, e.g. you want it clipped (partially visible image) or need it to be mixed with the underlying contents (think of a non-square HDR shape that blends into SDR text beneath or wrapped around it).
From a pragmatic perspective the best approach would be to see how toolkits, etc, use HDR support in Wayland and implement something similar under X11/Xorg to make supporting both of them easy.
> But really, consider switching to Wayland.
I've been using Window Maker for decades and have no interest in something else. Honestly, I think that adding Wayland support to Window Maker or making a Window Maker-like Wayland compositor are both more of an effort and harder than adding HDR support to Xorg. Also, I sometimes try KDE Plasma Wayland for various things, and I have several programs with small but annoying issues under Wayland.
That said, from a practical perspective, one can use both. The only uses for HDR I can think of right now are games and videos, and I can keep using my Xorg-based setup for everything while switching to another virtual terminal running KDE Plasma Wayland for the games/videos that I want to see in HDR. Pressing Ctrl+Alt+Fn to switch virtual terminal isn't any different than pressing Win+n to switch virtual desktop.
Sure except Wayland doesn't work with nvidia so...
It works these days.
> X11 not supporting modern display technologies is arguably a bug,
X maintainers said it is a feature they do not want to implement. Because "we work on Wayland now, Wayland better".
You're free to submit patches and features to X.org.
Not really, and that is OP's point. Xorg maintainers don't really want to enhance X11 and add new features, only critical bug fixes. That is one of the reasons there are now X11 forks like XLibre.
Ultimately it doesn't matter now, because Xorg is kind of in a state of "active abandonment"; that is to say, the only maintenance being done is fixing critical security issues on distros Red Hat still supports. In open source, you go where the developer energy is, and right now that's Wayland.
If you're about to tell me that XLibre is a viable alternative, no you're not because it isn't.
I've been using KDE since before Wayland was a twinkle in RedHat's eye, so trust me when I say that Wayland has always come across as an afterthought from KDE. I'm not saying it was, but given all the issues KDE users have had with Wayland over the years it sure looked that way. If somebody I loved was having trouble with KDE the first thing I'd ask is if they had accidentally switched to Wayland (usually because of an upgrade). The majority of the time they'd check, sigh, and say yes. Switching back their problems would go away.
Reading this thread makes me want to try KDE/Wayland again, so probably on my next install I'll give it another shot. If it's still crap I think it's time to switch off of KDE.
I recently switched to hyprland but before that I was running a mixed HDR/SDR, mixed VRR/no VRR, mixed refresh rate setup with icc profiles applied under KDE wayland. No issues here tbh.
I installed Debian 13 on my laptop from 2014. It's got an NVIDIA K1100M. The latest proprietary driver supporting it is 390 which is not supported by Debian 13. It was by Debian 11. I skipped 12. I run Nouveau and Wayland and everything that didn't work with Wayland on Debian 11 works now, with one unfortunate exception: backlight control is broken, which means that I'm stuck with 100% brightness. That's probably a problem with the kernel or the driver because it happens with X11 too.
X11 has a workaround for that because I can use gamma correction to simulate brightness control and make it work with night light. There was no way to do it in Wayland: they stomp on each other and undo whatever the other software did. So I'm back to X11 and frankly I don't notice any difference.
If you have more luck with your graphic card you'll be probably OK with Wayland. Anyway the X11 session is there, logout from Wayland and login using X11.
> It's got an NVIDIA K1100M. The latest proprietary driver supporting it is 390 which is not supported by Debian 13. It was by Debian 11.
Tell me more, please.
Does it only have an nVidia or is it dual GPU and switching?
Because I have the latter and the lack of GPU drivers is keeping me on Ubuntu 22.04.
Is it possible you're just using the Intel GPU and your nVidia is inactive?
I'm still dual booting. Debian 11 to work and Debian 13 to finish setting up everything.
With Debian 11, kernel 5.10.0-35-amd64
I was sure that I was using the NVIDIA driver 390, but I ran dpkg -l before replying to you and found out that I'm actually running the 470.256.02 driver. I'm definitely running the NVIDIA card, because NVIDIA X Server Settings tells me that X Screen 0 is on "Quadro K1100M (GPU 0)"; I see it in /var/log/messages too.
cpuinfo reports that my CPU is an i7-4700MQ @ 2.40GHz, which according to the Intel web site has an internal Intel® HD Graphics 4600. I think I never used it. NVIDIA X Server Settings does not report it, but it's an NVIDIA program, so I would not be surprised if it doesn't see it. Anyway, the kernel module for Intel should be i915 and it's not there. Maybe I have to load it, but I'm phasing out this version of the OS. I'm pretty sure I never installed anything to switch between the two GPUs. There used to be something called Bumblebee. Is that what you are using now?
Apparently I can install the 470 driver in Debian 13 (https://forums.debian.net/viewtopic.php?t=163756), but it's from the unstable distribution, and if Nouveau works I'm fine with that. I'm afraid the NVIDIA driver and Wayland won't mix well even on 13, so I'll be on X11 anyway.
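If you want to check what your own machine is doing, something like this should show which kernel driver each GPU is bound to and which one GL actually renders on (glxinfo comes from Debian's mesa-utils package, if I remember right):

    # Both GPUs and the kernel driver currently bound to each
    lspci -k | grep -EA3 'VGA|3D'
    # The vendor/renderer the GL stack is using right now
    glxinfo | grep -E 'OpenGL (vendor|renderer)'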
Very interesting. Thanks a lot for this! I will experiment and see if I can get it working.
I use older Thinkpads with Optimus switching, so using the Intel GPU is not optional: it is always on, but the OS offloads GPU-intensive stuff to the nVidia GPU.
In my testing with Debian 12, I could not get my nVidia chips recognised at all. In some distros, this has the side-effect of disabling the Displayport output, which is a deal-breaker as I use an external monitor and the machines do not have HDMI.
I was just pointing this out in the other comment. So no, bugs are still there; plenty of system-level graphics glitches in all but the most trivial circumstances.
Jank and glitches. Jank and glitches.
Wayland / KDE has been my main driver for a year on Void linux. IIRC it requires some tweaks at the start, it has worked without problems since then.
I use the current stable with KDE Plasma over X11, there's nothing forcing you to use Wayland in Trixie.
There are still numerous little issues.
I hope it also means they've managed to do what no Wayland compositor has managed in the past 15 years: working accessibility (screen reader support, etc.) that works with existing applications. Otherwise this is just another toy/demo distro.
And no, GNOME's Wayland compositor did not achieve it either. They threw away all accessibility support and then invented two new GNOME-only protocols for it that no software except GNOME's own compositor supports.
Call me when I can run Wayland and share my full screen on M$ Teams. Last time I checked it was just individual windows.
Cross that hurdle and I can go back to trusting the Linux Desktop for business things.
Works fine in current KDE master branch, and it's been working for quite a while so it should be in the current release. Note that I run Teams in MS Edge for Linux, which is my dedicated Teams runtime environment and sandbox.
The only time I've ever had screensharing working correctly is under Wayland
I use it via Chromium. Are there additional features in the Electron version?
There is no electron version of MS Teams on Linux (any more). Thanks Microsoft!
I wish them the best of luck. I never used Neon since it was a rolling release distro. This one I also won't be using, because it's immutable and relies on Flatpaks, which are very buggy. Standalone binaries or AppImages are fine with me, but Flatpaks and Snaps are garbage.
Not only is Arch also a rolling distro (despite them saying "not Arch!"), Arch is one of the most horrible rolling distros in terms of stability. Their general principle for package breakage is "you should have checked the release log on our site". They don't throw an error or a warning; if something is a breaking change and you pull it into your system, you basically get a "hehe, should have checked the release log", and you're hosed.
If you want a good, actually professional rolling release, use SUSE Tumbleweed. They test packages more thoroughly, and they actually hold back breaking or buggy changes instead of the "lol read log and get fucked" policy.
This is a misunderstanding on the user's part, and one that a lot of people have.
Arch is a DO IT YOURSELF distro. They write that thing everywhere they can. The stability of the installation is literally ON YOU! Your responsibility as a DO IT YOURSELF distro user. They didn't trick you into it or something.
Expecting Arch Linux to spoon-feed you is like expecting IKEA to give you assembled furniture.
You should use openSUSE or other "managed" rolling release distros. Arch IS NOT A "managed" rolling release distro.
https://www.unsungnovelty.org/posts/01/2024/a-linux-distro-r...
Then they probably shouldn't ship it with a package manager.
My installation is now 6 years old. I never had a point-release distro last that long. Stability is subjective to hardware, for starters. And secondly, Arch is DIY. Do not use it if you can't get it to work for your use cases. We have 300+ distros to choose from. I am just politely telling you that your expectation that Arch would take care of your installation was never a promise from the project.
It's software. It will work the way it is written. As simple as that.
That doesn't follow; DIY is a spectrum. It can be perfectly reasonable for a DIY distro to ship a package manager, just as it can be reasonable for it to run on existing hardware instead of expecting you to break out the soldering iron.
Anecdote: 12 years with Arch, including a laptop with 9 years on one install. Zero issues. But yeah, there’s a low volume mailing list. Get on it. Read it, it’s very short and to the point, and it’s only a few times per year.
Are you talking about Arch-announce? (https://lists.archlinux.org/archives/list/arch-announce@list...)
I am new to Arch and would like the notifications that you are talking about.
Yes, and also https://archlinux.org/feeds/news/
Very uncharitable perspective on people that do the work for free. I can understand not wanting to use a distribution where breakages can happen, but being a dick about it less so.
To be fair to Arch, you can always subscribe to their RSS or mailing list if you want to be notified about breaking changes
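And if you'd rather not rely on memory, you can skim the news feed from a shell right before upgrading. A rough sketch (assumes GNU grep for the -P flag; the first line printed is the feed's own title):

    # Print the most recent Arch news headlines, then upgrade
    curl -s https://archlinux.org/feeds/news/ \
      | grep -oP '(?<=<title>).*?(?=</title>)' | head -n 6
    sudo pacman -Syu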
> Arch is one of the most horrible rolling distros
We've had different experiences. I've been using Arch for about 8 years and have had to scour the forums no more than thrice to find the magic incantations to fix a broken package manager. In all cases, the system was saved without a reinstall. However, it is certainly painful when pacman breaks.
;-) That's a very different experience from mine. I've had quite a few broken packages, easily over 10 in the last year and a half. It was easy enough to find them and roll them back, but I don't know how people can say Arch is stable. Do you update regularly?
> Do you update regularly?
Sometimes once a month, sometimes once a week, sometimes more if there's a critical CVE.
I don't want to have to manually scroll through all the release logs on every single upgrade, in case there might be a landmine in there this time. Nor does any rational person who values their time or their system stability.
It is a million times more sane to have a package manager throw a warning or an error when a breaking change is about to be applied, rather than just YOLO the breaking change and pray people read the release log.
It is one of the most stupid policies ever, and the main reason why I will steer everyone away from Arch forever. Once bitten, twice shy.
I've been using Arch Linux for over a decade and have literally never once consulted release logs, and never got into any serious trouble.
I do subscribe to the arch-announce mailing list which warns of breaking changes, but that receives around 10 messages per year, and the vast majority aren't actually all that important.
I've also gone multiple months between updates and didn't have any problems there either.
The idea that Arch Linux breaks all the time is just complete nonsense.
His point is that Arch will break the system without any warning during package upgrade.
A warning will dissuade users from upgrading their system instead of doing the manual intervention.
By way of example, Gentoo's `eselect news` is pretty good https://wiki.gentoo.org/wiki/Eselect#News
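For those who haven't used it, the flow is roughly this (subcommands per the wiki page above; new items arrive when you sync):

    eselect news list       # unread items are flagged
    eselect news read new   # show everything you haven't read yet
    eselect news purge      # delete items you've already read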
That’s three times too many. I have been running an Ubuntu server at home for 10 years and went through probably 4 LTS releases and the number of times apt flaked out on me - exactly zero.
I'm running Ubuntu 24.10 and they broke the upgrade to 25.04 if you're using ZFS on the boot drive. Their solution was to prevent the upgrade from running, and basically leave behind anyone stuck on 24.10 to figure it out for themselves.
TBF, they can't be expected to support every potential configuration users may think of.
If they weren't going to support the feature why did they provide it as an option on the installer without any warnings or disclaimers? This isn't some bespoke feature that I hacked together, it's part of the official installer. If I had known it wasn't fully supported then I wouldn't have used it.
So not rolling? I too have never had to open Windows Task Manager on macOS.
Actual Arch on two machines, no issues. The older one I've been using for 15 years now.
YMMV. Manjaro's broken on me multiple times. I leave a machine alone for two years and its next upgrade is almost guaranteed to break something.
Manjaro is not Arch, and its maintainers have repeatedly shown that they aren't very good at maintaining a distro: https://github.com/arindas/manjarno
This is revisionist at best. Manjaro has always been portrayed as Arch with a GUI by both sets of maintainers.
Maybe by Manjaro's maintainers, but certainly not by Arch's. I've been using Arch for a little over a decade. The position that I've always seen in the official IRC channel is that forks such as Manjaro are explicitly not Arch.
Here's one of the oldest versions of the "Arch-based distributions" page on the wiki. It has a notice at the top that says that forks are not supported by the community or developers: https://wiki.archlinux.org/index.php?title=Arch-based_distri...
Two years with no updates on a rolling release is not a good idea. Two years with no updates for anything connected to the internet is not a good idea.
I didn't say anything about the machine being on the internet persistently. It's a laptop sitting in storage mostly. The updates are for when it comes out of hiding.
Arch doesn't support more than 6(?) months between upgrades, maybe Manjaro is the same.
I guess my only option is to switch to a more stable distro such as Debian or SUSE. Manjaro has always been touted as a very light distro, good for old machines, but its instability makes it a no-go.
The choice of distro makes almost no difference w.r.t. performance on old hardware, as long as it's still supported.
The only (real but small) difference is between desktop environments and their choice of default apps (eg. file manager).
Gentoo is very stable in my experience and you get to choose exactly which packages you want to be unstable vs stable (the default).
You know you don't have to update it daily?
I swore off arch when an update surprised me by switching to systemd (years ago obviously) and trashing my system in the process
> SUSE Tumbleweed
> They test packages more thoroughly, and they actually hold back breaking or buggy changes instead of the "lol read log and get fucked" policy.
I am currently on Arch specifically because Tumbleweed shipped a broken Firefox build and refused to ship the fixed version for a month.
As a workaround I uninstalled the bundled Firefox and replaced it with the Flatpak. And on the next system update the bundled Firefox was back, because for some strange reason packages on SUSE are bundled.
Why is a comment trashing a different project, in the most lazy way possible, at the top of the page?
EDIT: wow, all the comments are like that. I guess something has to come first.
There has been an increasing trend in the use of up votes as likes instead of user moderation which results in worthwhile discussion sinking to the bottom and stuff like this being at the top and setting the general tone of the discussion.
I never got Neon to work in a way that wasn't unpleasant.
I love neon, so it is a tie.
Neon is explicitly a bleeding edge KDE testbed (but I'll agree that their website undersells this fact)
Neon has 2 flavors, developer and user. The user one is stable.
Flatpak is the new systemd I guess.
Without being too negative, I'd like to point out that Neon, elementary OS, etc. tried the same thing. A project thinks "we need our own distro" but ends up pulling resources away from improving the desktop environment itself.
GNOME doesn’t maintain Ubuntu or Fedora, but it still dominates the Linux desktop experience.
Gnome has its own distribution called Gnome OS. It’s based on Fedora Rawhide.
It actually looks a lot like what KDE is shipping here, except Gnome provides it as a reference system for their developers at the moment, but it's totally usable as a user if you want to.
> It actually looks a lot like what KDE is shipping here
No, it does not, in any way whatsoever.
GNOME OS does not have dual root partitions, Btrfs with snapshots and rollback, atomic OS updates, or any of the other resilience features which are the unique selling points of KDE Linux.
In case you are unfamiliar with the design of KDE Linux, I have described it in some depth:
https://www.theregister.com/2025/08/04/kde_linux_prealpha/
And I compared it and GNOME OS here:
https://www.theregister.com/2024/11/29/kde_and_gnome_distros...
Your data is out of date. Gnome OS these days uses A/B updates with dual read-only /usr partitions and verified boot in the mold of https://0pointer.net/blog/fitting-everything-together.html.
> Gnome OS these days uses A/B updates with dual read-only /usr partitions and verified boot in the mold of https://0pointer.net/blog/fitting-everything-together.html.
Hang on. I have to say [[citation needed]] on this.
I write about systemd regularly, and read Lennart's blog and Mastodon feed. As evidence, I did an in-depth on systemd 258 just a month or so ago:
https://www.theregister.com/2025/07/25/systemd_258_first_rc_...
I do not personally use GNOME or GNOME Boxes and I've never managed to get GNOME OS to so much as boot successfully in a hypervisor or on bare metal, and I've tried many times.
But I don't think it adopts all these fancy features yet.
ParticleOS does:
https://github.com/systemd/particleos
But that's a separate distro. It's not GNOME OS. It's the testbed for the "fitting everything together" concepts.
Adrian Vovk's CarbonOS did much of this:
https://carbon.sh/
... but it's dormant now. He wants to turn GNOME OS into something like that, as he has said:
https://blogs.gnome.org/adrianvovk/2024/10/25/a-desktop-for-...
And I have written about:
https://www.theregister.com/2024/11/29/kde_and_gnome_distros...
I am not aware it has happened yet, though.
It was a relatively recent change [1]. Try the latest Gnome OS nightly ISO in a VM -- you'll see that they've (largely) implemented the partition scheme suggested in ParticleOS: root on btrfs, two partitions for /usr backed by dm-verity, new /usr images delivered using "systemd-sysupdate".
[1] https://www.osnews.com/story/139696/gnome-os-is-switching-fr...
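If you boot the nightly, you can poke at the mechanism directly. A sketch using systemd-sysupdate's CLI (verbs per the systemd documentation; what is actually configured depends on the image):

    systemd-sysupdate list       # versions it knows about, newest first
    systemd-sysupdate check-new  # anything newer than what's booted?
    systemd-sysupdate update     # install into the inactive /usr slot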
Very interesting. Thanks.
I will indeed have a look, ASAP -- but I hope this version is a little more tolerant of non-GNOME/non-RH hypervisors, or I won't get far...
> No, it does not, in any way whatsoever.
You mean apart from the fact that they are both immutable OSes allowing the use of Flatpak for software distribution?
Because from where I stand they have a lot more in common than different.
It's likely it's different people. It's volunteers mostly, they can do whatever they want.
The article already talks about Neon and the pros/cons of running that kind of distro in more detail than pointed out here.
> GNOME doesn’t maintain Ubuntu or Fedora
What differentiates GNOME from KDE in that regard (other than it'd be Kubuntu and the Fedora KDE spin from the other perspective)?
Yes, the key difference is GNOME has strong downstream partners that treat it as the default (e.g. Fedora Workstation, Ubuntu). This way GNOME gets a lot of testing, polish, and feedback without having to maintain its own distro.
I guess I'm confused on what the difference between "being the most popular Linux DE" and "being the default DE of the most popular Linux distros" is. Other than "already being most popular", what was/is KDE's partnership with these distros lacking that GNOME's wasn't/isn't? Since this all happened 10-20 years prior to either Neon or KDE Linux, and KDE has long had these kinds of partnerships, I'm assuming there is some other reason/thing you think KDE should be looking at.
Adding on from this new comment: Given whatever differences you see for GNOME in the above, why do you think GNOME has maintained its own testing OS for the last 5 years despite this?
> I guess I'm confused on what the difference between "being the most popular Linux DE" and "being the default DE of the most popular Linux distros" is.
You put the things in quotation marks but I do not see these phrases in the thing to which you're commenting.
KDE is roughly a year older than GNOME.
Snag: KDE was built in C++ using the semi-proprietary (dual-licensed) Qt. Red Hat refused to bundle Qt. Instead, it was a primary sponsor of GNOME, written in plain old C not C++ and using the GIMP's Gtk instead of Qt.
This fostered the development of Mandrake: Red Hat Linux with built in KDE.
In the late 1990s and the noughties, KDE was the default desktop of most leading Linux distros: SUSE Linux Pro, Mandrake, Corel LinuxOS, Caldera OpenLinux, etc. Most of them cost money.
In 2003, Novell bought SUSE and GNOME developer Ximian and merged them, and SUSE started to become a GNOME distro.
Then in 2004 along came Ubuntu: an easy desktop distro that was entirely free of charge. It came with GNOME 2.
Around the same time, Red Hat discontinued its free Red Hat Linux and replaced it with the paid-for Red Hat Enterprise Linux and the free, unsupported Fedora Core. Fedora also used GNOME 2.
GNOME became the default desktop of most Linuxes. Ubuntu, SUSE, Fedora, RHEL, CentOS, Debian, even OpenSolaris, you got GNOME, possibly unless you asked for something else.
KDE became an alternative choice. It still is. A bunch of smaller community distros default to KDE, including PC LinuxOS, OpenMandriva, Mageia... but the bigger players all default to GNOME.
Many of the developers of GNOME still work for Red Hat today, over 25 years on. They are on the same teams as the developers of RHEL and Fedora. This is a good reason for GNOME OS to use a Fedora basis.
> Around the same time, Red Hat discontinued its free Red Hat Linux and replaced it with the paid-for Red Hat Enterprise Linux and the free, unsupported Fedora Core.
This is a common misconception. RHEL and RHL co-existed for a bit. The first two releases of RHEL (2.1 and 3) were based on RHL releases (7.2 and 9). What was going to be RHL 10 was rebranded and released as Fedora Core 1. Subsequent RHEL releases were then based on Fedora Core, and later Fedora.
https://docs.fedoraproject.org/en-US/quick-docs/fedora-and-r...
IMHO a summary a few paragraphs long of a decade of events in a complex industry must simplify matters.
Sure, there was overlap. Lots of overlap. You highlight one. Novell bought SUSE, but that was after Cambridge Technology Partners (IIRC) bought Novell, and after that, then Attachmate bought the result...
But you skip over that.
I think as a compressed timeline summary, mine was fair enough.
It is really important historical context that KDE is the reason both Mandrake and GNOME exist, and it's rarely mentioned now. Mandrake became Mandriva and then died, but the distros live on, and PC LinuxOS in particular shows how things should have gone if there had been less Not-Invented-Here Syndrome.
I don't think "well, actually, this happened before that" is as important, TBH.
No?
> You put the things in quotation marks but I do not see these phrases in the thing to which you're commenting.
Quotes are overloaded in that they are used for more than direct citation. In this case: to separate the "phrase" from "the sentence talking about it" (aka mention distinction - as used here as well). "s are also seen in aliases, scare quotes, highlighting of jargon, separating internal monologue, and more. If it doesn't seem to be a citation it probably wasn't meant to be one. On HN, ">" seems to be the most common way to signal a literal citation of something said.
This is a fair enough, even more detailed, summary of the history, but I'm still at a loss for stitching this history to what KDE should be doing today. Similarly, for why this relationship results in good reasons for GNOME OS to exist but not KDE Linux? E.g. are you saying KDE Linux should have been based on something like openSUSE (Plasma is the default there) instead of Arch, that they should have stuck to several more decades of not having a testing distro, or that they should do something completely different instead?
I don't use GNOME or KDE as my DE, so I genuinely don't know what GNOME might be doing that KDE should be doing instead (and vice versa) all that deeply. The history is good, but it's hard for me to weed out what should be applying from it today.
Or maybe I completely read too far into it and it was only a statement that GNOME has historically been more successful than KDE. It's known to happen to me :D.
I thought I spelled it out clearly.
Let me emphasise the executive summary:
1. KDE was first.
2. KDE used to enjoy significant corporate backing.
3. Because of some companies' actions, mergers and acquisitions, etc., other products gained ascendancy.
4. KDE is still widely used but no longer enjoys strong corporate backing.
5. Therefore KDE is going it alone and trying something technologically innovative with its showcase distro, because the existing distro vendors are not.
The KDE Linux section of this recent article of mine spells out my position more clearly:
https://www.theregister.com/2025/09/10/kde_linux_and_freebsd...
GNOME maintains GNOME OS.
A lot of the base for GNOME OS was also used for automated testing IIRC. I don't know if that is the case for KDE Linux (or Neon).
Fedora is a side gig for GNOME maintainers, same as Neon for KDE (:
Guess two side gigs make a full-time project, because KDE on Fedora provides a great experience.
Does immutability mean something like ChromeOS, where you cannot install packages on the system itself, but you can create containers on which you can freely install software, including GUI?
If yes, what are some good options for someone looking for a replacement to ChromeOS Flex on an old but decent laptop?
Yes, that's exactly what immutable means for Linux distros. Sometimes they're also called atomic for mostly the same reasons.
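The container side is usually handled by a tool like distrobox or toolbox rather than anything ChromeOS-specific. Assuming distrobox is available, usage looks roughly like:

    # Create a mutable Arch container that shares your $HOME
    distrobox create --name dev --image archlinux:latest
    distrobox enter dev
    # Inside it, pacman works normally; GUI apps can even be
    # exported back into the host's application menu:
    distrobox-export --app gimp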
I don't have good experiences with snap and flatpak, so I hope this is not it.
Hey the reason behind my username!
To add something useful, OSes are the one area where reinventing the wheel leads to a lot of innovation.
It's a complete strip down and an opportunity to change or do things that previously had a lot of friction due to the amount of change that would occur.
What was Cartwheel Linux? A quick search doesn't turn up anything related.
What makes you say "the one area"? There are plenty of areas that have enough development friction / inertia such that the same principle applies. Even generally, I think the reason why people caution against reinventing the wheel isn't because it prevents innovation, but because it wastes time / incurs additional risk.
I agree with you. When I read that, my first thought was: "the one area"? Personally I think it's the complete opposite, like, really strongly. For at least 10 years now, once a week I think "I miss old desktop operating systems". Any of them: 7, Vista, XP; Snow Leopard, Leopard, Tiger. I even stopped using Ubuntu when it went from GNOME 2 to GNOME 3, and the other options at that time were pretty bad, so I ended up getting back into Macs for my home desktop. I still use all 3 daily, but hate all of them.
> OSes are the one area where reinventing the wheel leads to a lot of innovation
To me, it seems like the opposite is true. Operating systems feel like a solved problem. What are some of the big innovations of recent times?
> Operating systems feel like a solved problem
Even the desktop environment is not solved. I'm typing this from a relatively new method of displaying windows - a scrolling window manager (e.g. Karousel [1] for KDE). It just piles new windows to the right and scrolls horizontally, infinitely. This seems like a minor feature, but it changes how you use the desktop entirely, and it required a lot of new features at the operating system level to enable. I wouldn't go back to a desktop without this.
The immutable systems like NixOS [2] have been an absolute game changer as well (quick sketch of the rollback workflow after the links). Some parts are harder, but having the ability to always roll back, plus the safety of immutability, really makes your professional environment so much easier to maintain and understand. No more secrets, no more "I set something for one project at system level and now years later I forgot and now something doesn't work".
I've been on linux desktop exclusively for almost 15 years now and it has never been as much fun as it is today!
1 - https://github.com/peterfajdiga/karousel
2 - https://nixos.org/
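The rollback workflow mentioned above, roughly (standard NixOS commands; the profile path is the default one):

    sudo nixos-rebuild switch    # every rebuild becomes a new generation
    sudo nix-env --list-generations -p /nix/var/nix/profiles/system
    sudo nixos-rebuild switch --rollback    # flip back atomically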
Nifty.
I've long wanted a scrollable/zoomable desktop, with a minimap that shows the overall layout. Think the UI of an RTS game, where instead of units you move around and resize windows. This seems like something in that direction, at least.
How does Karousel work with full screen applications, e.g., games?
Karousel knows when an application wants to be fullscreen and lets it take the screen. If you use the hotkey for "move focus to left/right window" you can even exit fullscreen to see other programs. You can also force any program to fullscreen with a key. This is a pretty good workflow, as you can fullscreen something and still keep the layout, just not visibly.
Am I the only one who thinks that DBus and XDG are causing a lot of problems?
I would love to see a complete overhaul of those.
In my opinion, if I type "xeyes" and it works (the app shows on my screen), then I should be able to start any other X11 application. However, gnome-terminal behaves differently. I don't know precisely why, but using dbus-launch sometimes works. It is a very annoying issue. A modern Linux desktop system feels like microservices connected by duct tape: sometimes it works, and sometimes it doesn't.
On the DE, we just struggle with polish. This is paradoxically both an issue of not enough fruitful innovation and not enough maturity of good innovations that happen and take forever to be adopted.
As far as the actual OS, the new sheaves and barns thing in Linux is neat. We need innovation in RAM compression and swapping to handle bursty desktop memory needs better.
The main problem, and the one I'm trying to solve, is that as a software engineer, you have little incentive to make something that millions of people will use on the Linux desktop unless you have some other downstream monetization plan. You will have tons of users who have questions, not code contributions. To enable users to better organize into their own support structures and to make non-code contributions, I'm building PrizeForge.
Not really, unless you rewrite the kernel too. Security in Linux needs a complete makeover, where applications are not trusted as they are now.
> applications are not trusted as they are now.
Agreed, but...
> rewrite the kernel
Why would you do that? The kernel already has all the tools you need for isolating apps from each other. It's up to userspace to use these tools.
Because you don't bolt security on top of an existing system. You include it in the design of the system.
Can you please include enough technical details to have a discussion instead of making assertions so broad that they can't even be wrong?
It's more a matter of best practices than technical details.
You can build a skyscraper on top of the foundations of a shed, and the kernel devs have done an amazing job at that, but at some point you gotta conclude that maybe it is better to start from scratch with a new design. And security is a good enough reason.
If I'm able to do everything I can in my regular arch Linux installation, it would be nice to try an arch derivation that is immutable by design.
What I'm afraid of is starting to experiment and finding, more and more, that my workflow is hindered either by some software not working because the architecture of the OS is incompatible, or by KDE UX design choices in the user interface.
That's not to say that it wouldn't be interesting, and it would say nothing about the quality of the software if I'd hit such walls, only that I'm not its target audience.
I find that I really like using an immutable distro with a custom image (built with github actions).
So I can really separate the system-level changes (in the image, version-controlled) from my user changes.
It's a NixOS-like experience without using Nix at all.
There have been a couple of things to keep in mind with my Bazzite installation; for creating users or adding groups, for example, this pointed me to systemd-sysusers, but it was simple.
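For anyone hitting the same thing, a minimal sysusers.d sketch (names illustrative); systemd-sysusers picks it up at boot, or you can run it by hand:

    # /etc/sysusers.d/50-local.conf
    # Type  Name     ID  GECOS                Home           Shell
    g       mygroup  -
    u       myuser   -   "Illustrative user"  /home/myuser   /bin/bash

    sudo systemd-sysusers    # apply immediately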
I've been wanting to do this! The plan was to modify the Bazzite DX build script, but ultimately Fedora being the base was a deal breaker for me. With KDE Linux this might finally be a dream come true.
It seems like KDE Linux uses a different way to provide a system image than ostree on Fedora Silverblue, so I have no idea how easy it is to make changes on top of.
But for Bazzite (and other universal blue distros) you better use BlueBuild
https://blue-build.org/
In the end it's an OCI container image, so you could technically just have a Dockerfile with "FROM bazzite:whatever" at the top, but BlueBuild automates the small stuff that you need to do on top of it, and lets you split your config across files.
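For example, a bare-bones Containerfile on top of a Universal Blue image looks roughly like this (tag and package purely illustrative):

    FROM ghcr.io/ublue-os/bazzite:stable
    RUN rpm-ostree install htop && \
        ostree container commit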
You can have a look at my repository to see how easy it is!
https://github.com/LelouBil/Leloublue
Yeah... at this point I would give in to Nix for managing the underlying Arch system. It's not a gentle learning curve, I believe, but at least the community around Nix is strong.
That's what I use too on Bazzite, custom image for system level stuff, and home-manager for user-level stuff.
The nice thing about Fedora Silverblue's model is that it is literally a container image, so to "build" your image you can run arbitrary commands, which is way simpler than Nix.
> If I'm able to do everything I can in my regular arch Linux installation
No, you can't.
If you want Arch but with snapshots and rollback, Garuda Linux does that by default. It is not immutable, though.
For snapshots and rollbacks, my backup strategy with Borg is enough. I also take an hourly inventory of the installed packages, so if I need to I can go back (a maximum of 7 days, a minimum of 2) and see what changed. It's usually enough.
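The inventory part is a one-liner on Arch, run from cron or a systemd timer (paths and dates illustrative):

    # snapshot the explicitly installed package set, hourly
    pacman -Qqe > ~/pkg-inventory/$(date +%F-%H).txt
    # later: what changed between two snapshots?
    diff ~/pkg-inventory/2025-05-01-09.txt ~/pkg-inventory/2025-05-02-09.txt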
I feel this is a false equivalence.
Atomic updates are not the same thing as backups.
Backups: my file is gone, overwritten, corrupted, I accidentally deleted contents I want... but my computer is working, so I will retrieve a copy from my backup.
Atomic updates: aargh, my computer was halfway through installing 150MB of updates across 42 packages, but one file was bad and the update failed, so I rebooted and now it won't boot! No problem: reboot to the boot menu, choose the previous snapshot, a known-good config, and you can boot up and get back to work until the update is available again.
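On snapshot-based setups (Garuda uses snapper on Btrfs), that recovery flow is roughly this, with the snapshot number illustrative:

    sudo snapper list           # enumerate snapshots
    sudo snapper rollback 42    # make a known-good snapshot the new default
    sudo reboot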
Check out Arkane Linux. Unfortunately the documentation is quite sparse as of yet, but I think it's a very interesting concept.
Thanks for the suggestion. I find it very discouraging to experiment with sparsely documented projects, it feels like you are unwelcome in such projects.
I'm not a Linux user (yet) and I'd like to understand what "immutable" means here. Does it mean that I can't, eg, install Elixir or an IDE on it? I have absolutely no interest in deeply tuning the OS, which is why I'm interested here - I've been on Windows for decades for a reason. But if installing applications is blocked, or cumbersome, then who is this for?
It means the base system doesn't support individual package updates. Similar to a docker image, upgrading to the next version requires a complete base-image upgrade. In general, it shouldn't affect your ability to add additional software on top, but it may impact how you do so (e.g. Fedora Silverblue mainly expects Flatpaks on top of the base OS, though it can also layer packages into the base image).
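On Fedora's atomic variants, for instance, the whole cycle is roughly (a sketch):

    rpm-ostree status      # show booted and previous deployments
    rpm-ostree upgrade     # stage a complete new base image
    rpm-ostree rollback    # boot back into the previous deployment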
Immutable here just means there is a base OS+libs that you don't touch. So Elixir or an IDE would install into a sandbox, along with any needed libraries not included in that base, instead of installing all the libraries and such globally.
So then if I do "mix deps.get" to fetch elixir libs, will that work? will it be able to compile files that are outside the sandbox?
If mix can work without sudo/root, it will absolutely work on an "immutable distro". On the other hand, this particular immutable distro may not have all the C libraries BEAM/Elixir expect in its base, and while Silverblue does let you add to the base, this one doesn't sound like it will. So it might take some effort, hard to say at this point, though you can always add to your PATH.
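One common pattern on immutable distros (KDE Linux reportedly ships Distrobox) is to do toolchain work inside a mutable container that shares your home directory; a sketch, with image and names illustrative:

    distrobox create --name dev --image docker.io/library/archlinux:latest
    distrobox enter dev
    # inside: a normal mutable Arch userland with pacman
    sudo pacman -Syu elixir
    mix deps.get    # deps land in your project dir in the shared $HOME

Because $HOME is shared, anything mix fetches or compiles is visible both inside and outside the container.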
> KDE Linux is an immutable distribution that uses Arch Linux packages as its base, but Graham notes that it is "definitely not an 'Arch-based distro!'" Pacman is not included, and Arch is used only for the base operating system. Everything else, he said, is either compiled from source using KDE Builder or installed using Flatpak.
Funny; sounds more like a BSD (a prebuilt single-artifact Arch "base system" + KDE Builder-based "ports collection") than a Linux.
Oh yes, just what Linux needed, one more distribution. This will help accelerate year of the Linux desktop.
I agree with you in principle. However, this distro in particular makes use of an immutable base system, which although not new, is definitely not extremely common among Linux distros.
Immutable distros today feel like someone read a CNCF "best of" publication and decided to throw it at desktop Linux to see what sticks. Not everyone wants to be a DevOps engineer.
I think the concept has promise (see: ChromeOS) but the execution today is still way too rough.
A well maintained KDE Arch distribution sounds very nice. I love KDE and tolerate Kubuntu.
Note that it's not necessarily an "Arch distribution" in the sense you might expect:
> KDE Linux is an immutable distribution that uses Arch Linux packages as its base, but Graham notes that it is "definitely not an 'Arch-based distro!'" Pacman is not included, and Arch is used only for the base operating system. Everything else, he said, is either compiled from source using KDE Builder or installed using Flatpak.
This sounds fairly close to SteamOS in terms of structure. (Which seems to work well for its own use case, so I can see the logic.)
> Kubuntu
This is where I've been for the last 7 years. Very happy with it. I'm looking forward to an Arc Pro machine with SR-IOV GPU capability for VMs. That is pretty much my dream desktop, as much as I care to have one.
The premise "we write software which is installed on operating systems, so we need our own operating system as well" doesn't make sense. Also the point that there are other operating systems like elementary or gnome OS out there is a moot point. At least for elementary OS i kind of get the promise of some high quality user experience focused MacOS competitor.. But KDE OS? Why should I not just install KDE on my distro?
This distro doesn't seem to be born out of some real need for non-KDE-developers? Maybe it should be just some playground for KDE devs to test drive new tech?
> This distro doesn't seem to be born out of some real need for non-KDE-developers?
It's born out of a few things:
a) KDE as a community has increasingly focused on good and direct relations to end-users of late, which e.g. has resulted in most of the funding now coming from individual donors. Wanting to make more of their computer-using experience better isn't a strange impulse to have.
b) The community has hardware partners (e.g. laptop makers) that want to collaborate on pre-installing something with a KDE-focused out-of-the-box user experience. That has so far been Neon, which has a number of engineering and stability issues that have been difficult to overcome. KDE Linux is an attempt to improve on that.
c) It's also generally driven by a lot of the lessons learned from the SteamOS and Neon projects, and is attempting a lot of new solutions for risk-free updates, hackability, the out-of-the-box experience, and down the road likely also backups. The team does think there is a value prop to the distro as such, beyond the KDE GUI.
d) The developer audience isn't unimportant either. More KDE developers on an immutable+sandboxed apps distro will mean more eyeballs on e.g. Flatpak problems, improving that for everyone else. Many recent new distros that ship Plasma by default (e.g. SteamOS, Bazzite, CachyOS, etc.) benefit.
So..
a) I get that a lot of users use KDE. And they love the desktop environment. But is there demand for an OS? Would those users switch? I hope so, but for such a big decision, to build, support, and maintain a whole OS, I'd expect some kind of poll maybe? Some input saying "30% of KDE users would switch to KDE OS"? Is there some kind of proof? I've been using GNOME for years but never felt I would want to switch to some GNOME OS. The desktop environment is one of many tools in my distro (for me, at least).
b) Supporting lots of hardware (especially laptops!) seems to be a huge time sink for people not primarily involved in kernel/driver stuff, or not?
c) ok..
d) Same as a): will all KDE devs use KDE OS? And is it good to have the KDE devs on KDE OS when the majority of users use Arch/Debian/Ubuntu/Fedora? I'd rather have a good chunk of those devs use my distro...
I love using KDE and use it on all my desktop machines. I even have a source-compiled version ready to test and hack on if I need to; it's utterly fun and easy to build using kde-builder, and it works on most distros including Ubuntu/Debian, Arch, and Fedora.
That said, I don't think having yet another immutable distro is a great idea if they are only going to punt and use Flatpaks. You can run Flatpaks on any distro out there, so I'm not really understanding the idea behind this. Nothing really stands out from the article: they still need to make KDE work great with most other modern distros, so it isn't like a Flatpak-based KDE is going to give them an edge in having the best KDE on their own distro.
What am I missing?
I could have sworn they have had this for a while... Nice that it is Arch Based, I wonder if they bothered to look at Arkane Linux which is also atomic, and the maintainer has all his scripts on how to do it available for anyone to make their own spin. I feel like it could have been beneficial for both KDE and the maintainer of Arkane Linux to work together.
can't wait for Hyprland Linux
> Unlike Fedora's image-based Atomic Desktops, KDE Linux does not supply a way for users to add packages to the base system. So, for example, users have no way to add packages with additional kernel modules.
But then, since / is rw and only /usr is read-only, it should be possible to install additional kernel modules, just not ones that live in /usr - unless /lib is symlinked to /usr/lib, as happens in a lot of distros these days.
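Easy enough to check whether that caveat applies (output shown for a typical merged-/usr system):

    ls -ld /lib
    # lrwxrwxrwx 1 root root 7 ... /lib -> usr/lib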
Well, as long as they're either updating frequently or you're not using nvidia drivers (which are notoriously unpleasant with Wayland) I guess it's fine for a lot of people.
KDE made me fall in love with Linux. The UI familiar from Windows, the insane customizability, the snappiness: each and every one of their contributors is legendary.
Will this help KDE-Plasma finally move from pre-alpha towards something that can be used daily, or will we still need another decade or two?
Asking this as a user who really would love to move away from X11, but every time I try anything Wayland-related it's just alpha or pre-alpha: endless graphics glitches, windows going black or flickering (double the glitches after turning the display off/on), multiple rendering issues with Firefox, CLion, etc.
I think I'm mentally preparing to use X11 until retirement....
The thing is the first 90% of software is the easy part. Once you've done that you still need to do the other 90%. And the latter 90% is what separates little hobbyist weekend projects from products. It's a relentless boring grind of testing, fixing bugs and sharp edges and adding workarounds.
When did you last try it? It's been rock solid for me since Plasma 6, and I use things like fractional scaling.
A week ago, on KDE-Plasma 6, whatever is the latest on Arch.
Using the NVIDIA proprietary driver... glitching like a MOFO. Looks slick but just way too buggy to be used.
Some things to try:
So yeah... pre-alpha. P.S. I also tried XFCE and Enlightenment... and those are not any better (not that they claim to be anything but pre-alpha).
Honestly... on Windows 11 the experience is just so damn smooth and slick. Nothing glitches or hangs. The Linux graphics stack just lags behind decade after decade... never catches up...
> Using NVIDIA proprietary
Ah, I haven't tried it on NVIDIA drivers in a while.
I'm doing a reinstall on my gaming PC soon, so I'll give it a shot then. I've been using it on Intel and AMD systems, and haven't had issues. But you know, they actually have drivers that are designed for the modern linux graphics stack.
> P.s. I also tried XFCE and Enlightenment.. and those are not any better (not that claim to be anything but pre-alpha).
So... maybe the NVIDIA drivers then? And not KDE Plasma?
> The Linux graphics stack just lags behind decade after decade... never catches up...
Come on, you can't really blame NVIDIA's dogshit drivers that refuse to integrate into the rest of the stack on the KDE devs.
No, what I mean with XFCE and Enlightenment is that they admit to being alpha.
Yeah, well, the reality is that the NVIDIA drivers are the drivers one wants to use on NVIDIA hardware (which many of us have).
And somehow they work fine on X11.
It's always nice to blame the driver vendor, but what have the Linux community, the kernel team, and the graphics team done to promote Linux and make it simple to write correct, performant drivers for the platform? How many graphics memory allocators are there? How many buffer-sharing APIs? Are the kernel driver interfaces stable?
When other driver vendors have been able to work with the kernel team quite successfully, I think it starts to become fair to blame the other vendor.
I've been using KDE-Plasma on Wayland (Debian 13) since release as a daily driver, and I'm happy to report that it is super stable, has no problems waking up from suspend and hibernate, and is a superb all-around shredder. I haven't noticed any glitches, flickering, or bugs so far, despite intensive daily abuse.
I love using KDE Plasma. All the best to the team!
Why tf is KDE spending precious developer capacity on this?
Fedora's atomic KDE spin (Kinoite) is close to perfect. Where is the need to reinvent the wheel?
For me it is natural that, since the desktop environment is the most important part of a desktop operating system, it should have its own distribution.
Does it support Gnome?
Does it finally solve the package management problem?
Yes. The ChromeOS way.
There's no package manager and you can't install, remove, or upgrade packages.
You get whole-OS image updates from the distributor, just like iOS or Android.
But in Android/iOS you can install apps.
The apps do not install or update any libraries included in the base image of the OS. An app may rely on a specific or minimum version of the OS, but that's it. Everything else the software needs is installed into its own sandbox, and other applications cannot share it.
Ok. This makes me wonder:
The original idea of shared libraries was that a computer system can save time and memory because they only need to be loaded once.
Is that idea dead?
If you want effective sandboxing, yeah.. pretty much. If no one can agree on which version of the library you're provided, then bring your own.
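Flatpak splits the difference, for what it's worth: apps bundle their oddball dependencies but share common runtimes. If it's installed, you can see which apps share which:

    flatpak list --app --columns=application,runtime

Apps on the same runtime share those libraries on disk (and typically in memory); only the app-specific bits are duplicated.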
We should go back to static linking. With CI/CD, generating new packages is trivially easy.
Then we can throw out all these fancy packaging tools like Snap and Flatpak, all the fancy half-done COW filesystems like Btrfs, all the fancy hidden-delta-syncing stuff like OSTree, and just ship one honking great binary in a single file that works on anything, no matter what the libc is, so it even works on musl or whatever.
Ha ha, only serious.
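For what it's worth, the fully static workflow mostly exists today; a minimal sketch, assuming the musl-gcc wrapper is installed:

    cat > hello.c <<'EOF'
    #include <stdio.h>
    int main(void) { puts("hello, static world"); return 0; }
    EOF
    musl-gcc -static -Os -o hello hello.c
    ldd ./hello    # "not a dynamic executable"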
The best KDE implementation that I have seen is on Arch based distros (Arch, SteamOS, CachyOS, etc.).
Nothing else compares. Why reinvent the wheel?
I wouldn't say they are reinventing the wheel. Putting a new set of rims on them, maybe...
"KDE Linux is an “immutable base OS” Linux distro created using Arch Linux packages, but it should not be considered an “Arch-based distro”; Arch is simply a means to an end, and KDE Linux doesn’t even ship with the pacman package manager."
https://kde.org/linux/
Have you tried KDE on Fedora? I'm very happy with it.
What exactly is missing from say, KDE on debian? I recently installed it for my family computer and have had zero qualms with it.
CachyOS is great; I've been using it for months and it's been good overall. There is also Garuda Linux, which looks great too; I've only tested it a little, but it's worth trying if you are in your distro-hopping phase: https://garudalinux.org
Let's just run Haiku OS.
It seems like they are going to divert development effort away from KDE itself. If so, it's really a bad move.
I mean this is pretty much how people use MacOS: immutable base, individually packaged apps and brew on top for CLI things.
Doesn't sound too bad for work.
What about KDE Neon?
From the article:
> Neon has "served admirably for a decade", he said, but it "has somewhat reached its limit in terms of what we can do with it" because of its Ubuntu base. According to the wiki page, neon's Ubuntu LTS base is built on old technology and requires "a lot of packaging busywork". It also becomes less stable as time goes on, "because it needs to be tinkered with to get Plasma to build on it, breaking the LTS promise".
I run KDE Neon and this checks out. However, it's a terrible idea to create another distro; the Linux world needs more cohesion. I might go back to Ubuntu if their KDE 6 is decent now. I use the DE for the purpose of running programs in the environment, and that includes being able to easily set up things like CUDA, which is easiest on Ubuntu and a PITA with other options.
What do you mean by cohesion? I feel like cohesion with Linux would mean cohesion on the desktop environment (which we have, with GNOME and KDE being so popular) and the packaging format (which we have with Flatpak).
I mean "Linux" (as a user OS) should for the most part be a common experience, so if a very technical or very non technical person wants to do something or someone wants to support someone doing ordinary things using a different distro, they aren't on a whole new arbitrary mini-adventure full of surprises, except at the desktop level.
Mind you, even though I've been running Linux for decades, I have lost the enthusiasm for the low-level details and am happiest when I can use apt for everything and have the OS manage dependencies and updates. I see a lot of negative comments about Flatpak, and my experiences haven't been great, so I don't know if it is comprehensively good and will solve issues like low-level drivers (GPUs).
I don’t understand the differences between each distribution. Is there a real difference?
The big one: a different combination of packages, i.e. which versions are available and how they're configured and integrated. This generally also means they will have different package managers and configuration tools. Things have gotten a lot more regular between distros, but there are still notable differences in philosophy between them; how much you notice kind of depends on how much of a power user you are and how prone to breakage your use case and preferred applications are.
Distributions are like cars. They all get you from point A to point B, some of them will suit you less than others, and some people are really picky about which one they use for reasons.
Shifting on the wheel, floor, knob, buttons, etc. I've stuck mostly to Ubuntu/Debian based distros because I'm more comfortable with them and they have tended to be more sturdy/stable for my own usage (currently Pop COSMIC alpha though).
> Is there a real difference?
The main differences are related to packages: the package format (.deb, .rpm, etc.), the package manager (dpkg/apt, pacman, dnf, etc.), how frequently the packages are updated, whether they focus on stability or new features, and so on.
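For illustration, installing the same tool differs mainly in the front-end (package names can occasionally differ too):

    sudo apt install htop      # Debian/Ubuntu family (.deb)
    sudo pacman -S htop        # Arch family
    sudo dnf install htop      # Fedora/RHEL family (.rpm)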
New Linux users coming from Windows or Mac sometimes dislike one distro and like another, when what they really disliked was the desktop environment. For example, Kubuntu uses KDE Plasma as its desktop environment, and its user experience is almost the same as Fedora KDE, Manjaro KDE, openSUSE, and so on, while it's very different from default Ubuntu (which uses GNOME). But under the hood, Ubuntu and Kubuntu are the same (you can even uninstall KDE and install GNOME).
Actually, other Unix-like systems can install the same desktop environments that we have on Linux, so if you have FreeBSD with KDE you won't even notice the difference from Kubuntu at first, even though it's a completely different operating system.
tl;dr: there's a real difference, but from a user perspective it's mostly under the hood, not exactly in usability.
Yes; depending on the distributions you are comparing, the differences range from trivial to radical, to the point of making comparisons impossible.
I'm using Debian with the Plasma desktop, so I have a taskbar.
Will this impact me?
no
KDE seems to reinvent the wheel here and I wonder where they are going with that. There are pretty mature "immutable" distributions out there that could serve as a foundation and offer a lot of the same features that KDE Linux is supposed to support. For example, Aeon (of openSUSE MicroOS vintage) looks like all KDE Linux is aiming for, just with Gnome as DE.
But hey, more power to them.
There's a fair amount of overlap and collaboration in the engineering communities behind the different image-based/appliance OS projects, so it's not necessarily as redundant as you might think it is. E.g. the developers behind the distro tech behind KDE Linux, Gnome OS and Kinoite are pretty friendly with each other.
And of course the distros end up sharing the bulk of the application packages, originally a differentiator between the classic distros, via e.g. Flatpak/Flathub.
One reason we're doing KDE Linux is that if you look at the growth opportunities KDE has had in recent years, a lot of that has come from our hardware partners, e.g. Slimbook, Tuxedo, Framework and others. They've generally shipped KDE Neon, which is Ubuntu-based but has a few real engineering and stability challenges that have been difficult to overcome. KDE Linux is partly a lessons-learned project about how to do an OEM offering correctly (with some of the lessons coming out of the SteamOS effort, which also ships Plasma), and is also pushing along the development of various out-of-the-box experience components, e.g. the post-first-boot setup experience and things like that.
> For example, Aeon (of openSUSE MicroOS vintage) looks like all KDE Linux is aiming for, just with Gnome as DE.
And Kalpa is that just with Plasma as DE.
sounds like an omarchy competitor
would it be better than fedora kde???
Other than being immutable, I doubt it. Immutable distros tend to rely on Flatpaks to dynamically install new packages. Unfortunately, the Flatpak codebase is largely unmaintained at this point, and it's nearly impossible to get changes merged in.
Reference to Flatpak allegedly not being "actively developed" anymore:
https://news.ycombinator.com/item?id=44068400
Nope: https://github.com/flatpak/flatpak/pulse
There is already an immutable Fedora with KDE called Kinoite
https://www.fedoraproject.org/atomic-desktops/kinoite/
So this replaces Neon (Ubuntu-based) with an Arch-based distro.
Their distro seems somewhat confused.
According to kde.org/linux it comes with both Flatpak and Snap, plus Distrobox and Toolbox. They don't seem to pick a lane and stay consistent; it's all kind of random.
It's at an alpha stage; it's reasonable to see what people will use, especially because having an immutable base and needing tools to install things on top is still somewhat new.
KDE and GNOME are jointly behind Flathub, and a lot of the community effort goes into Flatpak packaging.
I did not realize anyone outside of Ubuntu used snap. When I was on Ubuntu, I had many annoyances with snap, but not sure if they have since improved the experience.
I wish them luck. But going Wayland-only instead of also supporting X11 means they're throwing away all the accessibility support that is integrated into existing Linux software. Their toy distro won't be ADA-compliant, and I certainly won't use it, since it lacks screen reader support.
Not called "Kinux" or "Linuks" or something? Missed opportunity.
or kinos ;p
kOS!
This has been hammered on by very prominent voices a lot. Stop making new "distros". Especially if you just want different defaults. You should be able to declare the defaults and apply them to your base distro, and if you can't there's your problem.
Most distros could be NixOS overlays. Don't like satan's javascript? Try Guix. Bottom line, the farther I get away from binaries discovering their dependencies at runtime, the happier I am.
I can't really imagine what compels those voices. If some group wants to make a distro, they should go for it. Worst case, it will just fail.
And let's also imagine what compels people to recommend enthusiasts onto paths where they will be more successful.
Maintaining distros that are not some kind of overlay, one that can track the underlying base automatically, is just asking for more maintenance than people will want to do, while also Balkanizing options for users: overlays can be composed, but distro hopping very much does not compose.
Relevant XKCD: https://xkcd.com/359/
There really is no such thing as a "new distro" these days. Everyone with the itch to roll their own builds on Debian or Arch, with a tiny handful of cool kids hacking on Nix instead. Scanning down:
> KDE Linux is an immutable distribution that uses Arch Linux packages as its base, but Graham notes that it is "definitely not an 'Arch-based distro!'"
Definitely not, indeed.
Honestly, I find Debian Testing good enough for the latest KDE Plasma. I have never understood the need for a specific distro for your desktop software and have never found Neon useful.
The only pain point I really found, even developing for KDE on Debian, was the switch from Qt 5 to 6, but that is always a risk and you can just compile Qt from source.
Another pain point is that their dev package manager doesn't have a way to conveniently target library/package branches. So you can spend a fair amount of time waiting for builds to fail and then passing the library or package version into the config file. Very tedious, and it no doubt cost me lots of time when trying to build on top of Akonadi, for example.
> find Debian Testing good enough for latest KDE Plasma
Latest as in "lagging for weeks while people in Ubuntu eat the bugs".
People on Arch are eating the bugs too. I think KDE would go MUCH farther if they just made their tooling a little easier and bundled that well enough. They wouldn't need a separate distro.
Ubuntu and Debian have the same maintainers for KDE and nearly identical packages so bugs are shared.
And? Regardless most bugs probably get fixed before landing in Debian.
I also never said “latest” packages. That is some heavy lifting done by you.
Ah, yes, the KDE people are definitely the people I trust most to deliver a reliable system and not go crazy chasing incongruent rewrites of things while abandoning what works...
/s
I approve of this - Linux distributions need to go and they needed to go about 20 years ago. They are the fundamental reason why Linux is not successful.
Distributions are literally the worst thing about Linux - and by worst I really mean it in a way that is filled with the most amount of disgust and hate possible, like one feels toward a literal or social parasite.
Linux distros provide little to no value (after all, these people just package software); they are just vehicles for petty losers to build their own fiefdoms, where they can be rulers. They, and the people who run them, are acid on the soul; they poison the spirit of openness and sharing by controlling who gets to use what.
Their existence was always political, and the power they wielded over who gets to use and see your software was stomach-churningly disproportionate to the value they provided.
Much like petty internet forums with pathetic power-tripping mods, a given Linux distro's maintainers get to decide that you, the dear programmer, the actual creator of value, get to have your work judged, and your right to deliver your software to users decided, by a distro maintainer: a petty tyrant who might not have the time, or might have some weird mental hangup about shipping your software. And even if they do ship it, they might fuck up your package, and the distro-crafted bugs will reflect badly on you.
I can shit on Microsoft and Apple all I want and it'll never impede my ability to deliver software to my users.
This is why open source failed on the desktop, and why we have three orders of magnitude more open-source zealots, and ignorant believers than actual programmers who work on useful stuff.
And why no one with actual self-respect builds software for the Linux desktop of their own free will, and why garbage dumps and bugs and missing features persist for decades.
Imagine the humiliating process it takes for a dev to ship a package on Linux: first you have to parley with maintainers to actually include your stuff. Then they add a version that's at best half a year out of date, to fit their release cadence. You're forced to use their vendored and patched libraries, which are made bespoke for their use cases, get patched for the 5 apps they care about, and can break your stuff at the drop of a hat.
And no, you can't ship your own versions, because they'll insta reject your package.
This is literal Windows 98 DLL hell, but Microsoft was at least a for-profit company you could complain to, and they actually had a financial stake in making sure users' software worked. Not so with Linux distros; they just wanna be in charge and tell everyone what they get to use.
Then you have
First, Ubuntu and snap should burn in hell. Much like their other efforts, they made a universal system that's hated by everyone and used by no one except them, and they keep pushing it with their trademark dishonest tactics copied from other dishonest vendors: even if you get rid of the excrement that is snap, they keep reinstalling it via updates.
Flatpak was meant to work like a reasonable package manager would: you assume a stable OS base, and demand and provide that, full stop. This is how Windows and MacOS have worked forever, and it doesn't even occur to devs that people using those OSes will have trouble running their software.
As I expected: downvoted but not countered. The zealot scoundrel shows his true face; his tools are not those of reason, but of whipping his herd of loyal mouthbreathers and turning them against people who disagree with him.
> installed using Flatpak
So essentially people are abandoning the memory/speed efficiency of the .so ecosystem and seeking exe/msi-style convenience... You know... a dump of legacy dll/static-.so snapshot versions with endless CVEs no one will ever be able to completely fix or verify.
Should be fun, and the popcorn is waiting =3
If you ever used Flatpaks, you would know that they are very noisy about dependencies not being up to date.
They also gain a substantial amount of security by being sandboxed by default, unlike the majority of native packages.
https://ejona.ersoft.org/archive/2024/03/03/flatpak-perm-sur...
Flatpaks can have insecure permissions, but those are not only transparent but also easily editable. Meanwhile, native packages are guaranteed to have insecure/all permissions.
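Concretely, inspecting and tightening a single app is two commands (app ID illustrative):

    flatpak info --show-permissions org.example.App
    flatpak override --user --nofilesystem=home org.example.App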
In general, SELinux profiles use Mandatory Access Control, and not Discretionary Access Control. However, most desktop users find it difficult to understand, and often have bigger problems from reading silly posts off the web.
An outdated old package library relies on people understanding/tracking the complete OS scope of dependencies, and that is infeasible for a small team.
If someone wants in... they will get in eventually... but faster on a NERF'd Arch install. =3
>most desktop users find it difficult to understand, and often have bigger problems
That is exactly the strong point of Flatpaks. It's a lot easier to use a toggle in a GUI for permissions than to write whole new profiles. Not to mention that many people disable SELinux entirely because it is difficult.
>An outdated old package library relies on people understanding/tracking the complete OS
It takes zero understanding to copy-paste an outdated-package warning and report it to the repo listed on Flathub. It explicitly tells you as much.
It seems the AstroTurf'ing folks buried the parent as children often do.
But thanks for trying to post actual relevant data on the topic. =3
"Popcorn Music Video" (The Muppets)
https://www.youtube.com/watch?v=Gwg5ey6236o
Security/dependency updates depend solely on the specific maintainers. The platform itself doesn't automatically fix developer or maintainer lethargy in this regard.
Yes, obviously, but it gives the user a clear alert to inform the package maintainer or remove the package.
This doesn't work. One would need to time-travel back to a LUG in the early days of the net to understand the siren song of tarballs =3
Snap and Flatpaks only real legitimate use-case is legacy compatibility:
1. Current release applications on deprecated OS (Mostly good)
2. Deprecated applications on current OS (Mostly bad)
The Windows-style packaging architecture introduces more problems than it solves. It's fine for running something like Steam games, with single-shot application instances using 95% of system resources each power cycle, but folks could also just stick with Windows 11 if convenience and security theater are their preference.
Some people probably won't notice the issues, but it depends on what they do. Arch Linux itself is a pretty awesome distro for lean systems. =3
>single shot application instances using 95% of system resources each power cycle
Source? There is no measurable energy or efficiency difference, at least for Flatpak, on any semi-recent hardware. I know that snaps do take a couple of seconds longer on first start.
I prefer Flatpaks for proprietary and internet-facing applications because of their easy sandboxing capabilities. There is also the advantage, on Arch Linux, of not needing to do a full system update to install a single application.
People often started here:
https://tldp.org/HOWTO/Program-Library-HOWTO/shared-librarie...
Getting into why the community argued for years while Debian brought up version-controlled deb packaging is a long, dramatic conversation. Some people liked their tarball mystery binaries, and the .so library trend started more as a contest to see how much people could squeeze out of a resource-constrained machine.
In a single unique application-running context, the power of a cached .so reference count is less relevant, since a program built with .so may reuse many resources that other programs (or itself) have likely already loaded.
> ldd --verbose /usr/bin/bash
> ldd --verbose /usr/bin/cat
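(Both will typically resolve to the same libc.so.6 and friends; the kernel maps that one copy and shares it across every process using it, which is the reference-count reuse described above.)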
Containerization or sand-boxing is practically meaningless when punching holes for GPU, Network, media and HMI devices. Best of luck =3
>Containerization or sand-boxing is practically meaningless when punching holes for GPU, Network, media and HMI devices
Many applications don't need these permissions, and even the ones that do will be much more secure than having full userspace access by default.
There's a difference between "someone could exploit the system to gain more access" and "someone doesn't need to do anything because they have full access by default". It's like arguing you don't need a root password because sudo is insecure anyway.
Not really; if some noob deploys janky code they don't understand, then someone will eventually worm it for sure. Containerization has not prevented an uptick in nuisance traffic from cloud providers; it has made it orders of magnitude worse.
Qubes, Gentoo, and FreeBSD are all a better place to start if you are interested in this sort of research. Best of luck =3
But you're also, hilariously, still paying the runtime cost of ELF dynamic linking instead of just static linking, which would at least avoid, e.g., GOT indirection overhead.
Again, static linking would only be useful in a single unique app run-and-dump scenario. People do link and strip .a files sometimes when porting to Windows and MacOS.
Some programs take a huge memory and performance hit on non-Linux machines. =3
> Some programs take a huge memory and performance hit on non-Linux machines
You're implying without stating it (or providing any evidence) that programs perform worse when statically linked than when assembled out of ELF DSOs, even when each of those DSOs has a single user.
That makes no technical sense. Perhaps you meant to make a different point?
An 8kB program loads and runs much faster if the .so it uses is already cached due to prior use.
A 34MB statically built version will cost that amount of I/O for every instance on a system that did not previously cache that specific program. It will also take up that full amount of RAM each time it runs.
Inefficient design, but it works fine for other, less performant OSes =3
> 34MB static built version
I've forgotten how to count that low.
Also, static programs are demand paged like anything else. Files aren't loaded as monoliths in either case. Plus, static linking enables better dead code elimination and devirtualization than is possible with an AOT-compiled and dynamically linked setup, which usually more than makes up for the text segments of shared dependencies having been pre-loaded.
I'm not sure you have enough technical depth to make confident assertions about linking and loading performance.
> =3
The "blowing smoke" emoticon isn't helping your argument.
If a stripped, statically linked binary saved that much space, then people probably chose the wrong library resources. Ripping out unreachable code sometimes has unintended consequences, but stripping debugging symbols is usually safe.
If .so reuse is low, or the code is terrible... it won't matter much. Best of luck =3
> I think all the major producers of free software desktop environments should have their own OS
Absolutely insane suggestion.
KDE seems to be losing the plot here-- how does this help build the best possible DE for the community? I feel like they are fragmenting developer attention and time by futzing around with this.
Meanwhile, there are issues that haven't been solved for months; the latest Plasma version has barely any decent themes (the online community theme submissions seem to be rife with spam), Discover is not really useful and needs curation, and settings and configuration are scattered everywhere, which is great for the average power user but makes it hard to know what you can tweak without being overwhelmed. Flatpak is great, but it really needs improving, more TLC, and work toward cleaning up. It's looking more and more like the Android app store every day.
KDE needs to stop trying to be everything to everyone and start getting a little more opinionated. I'd rather have a few well maintained components of a DE than many components that are no better than barely polished turds.
In any case, it's my favorite DE, and each and every KDE developer is an absolute legend in my mind.
> KDE seems to be losing the plot here-- how does this help build the best possible DE for the community? I feel like they are fragmenting developer attention and time by futzing around with this.
A lot of the manpower working on this previously worked on KDE Neon, so it's perhaps better to think of it as a lessons-learned project that doesn't in fact do what you worry about (and it has already attracted new contributors who also improve things elsewhere).
KDE also serves users (and hardware partners) with Neon, and they deserve improvement.
There's also the fact that, increasingly, new users experience KDE software as Flatpaks on new distros that ship Plasma by default, e.g. Bazzite and CachyOS, and it makes sense to get more developer eyeballs on this to make sure it's a good experience.
KDE GNU/Linux
What you're referring to as KDE GNU/Linux, is in fact, KDE Wayland GNU/Linux, or as I’ve recently taken to calling it..
I use Arch, btw
After decades of development and billions of dollars in investment, can we have just one distro that works as smoothly as MacOS? Then we can get back to having 2000 others for that one time we need to run Linux on a coffee maker.
I don't know that that will happen; not even Windows is as smooth as MacOS. But that's because Microsoft and Linux developers are tackling a more difficult problem: getting an OS to work with effectively infinite hardware permutations. Apple has given themselves an easier problem to solve, with just a handful of hardware SKUs and a few external busses.
That said, Android is pretty stable, because a given Android distro typically only targets a small hardware subset. But I don't think that's the kind of Linux distro that most people contributing to FOSS want to work on.
Apple has also yanked backwards compatibility a few times. I bet Microsoft would love to trash a few legacy API decisions from decades ago.
That being said, I still think Microsoft should have developed a seamless virtualization layer by now: programs from before year X run in a microVM/WINE-like environment, with some escape hatch to kill off the cruft.
Microsoft did both.
Pretty sure you're talking about ChromeOS
I tried macos once and did not like it. I will not be contributing to your little project, but I bid you good luck.
I had to use it for ~2 years for work and am glad that I am back on Linux. The amount of instability, bugs, features missing or removed(!) between updates, absent software packages, horrible user experience... it was just astonishing. You need a lot of fanboyism to cope with that.
What billions?
Arch-based? No thanks. Flatpak? Definitely no thanks.