The reality is that reliably backporting security fixes is expensive (partly because backports are hard in general). The older a distribution version is, the more work is generally required. To generalize somewhat, this work does not get done for free; someone has to pay for it.

People using Linux distributions have for years been in the fortunate position that companies with money were willing to fund a lot of painstaking work and then make the result available for free. One of the artifacts of this was free distributions with long support periods. My view is that this supply of corporate money is in the process of drying up, and with it will go that free long term support. This won’t be a pleasant process.

  • lemmyng@beehaw.org · 11 months ago

    The rationale for using LTS distros is being eroded by widespread adoption of containers and approaches like flatpak and nix. Applications and services are becoming less dependent on any single distro and instead just require a skeleton core system that is easier to keep up to date (see the sketch below). Coupled with the increasing cost of maintaining security backports, we are getting to a point where it’s less risky for companies to use bleeding edge than stable.
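
    A minimal sketch of that split (the image tag and command are just illustrative): the host contributes only a kernel and a container runtime, while the application image pins its entire userland:

        # Host provides only kernel + container runtime; the image
        # carries the whole userland, independent of the host distro.
        docker run --rm python:3.12-slim \
            python -c 'print("same userland on any host distro")'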

    • lemmyvore@feddit.nl · 11 months ago

      “widespread adoption of containers and approaches like flatpak and nix”

      And it’s about flippin time. Despite predating app stores by decades, the Linux package systems have been surprisingly conservative in their approach.

      The outdated, hardcoded filesystem hierarchy, combined with the rigid package file management systems, has ossified to a ridiculous degree.

      It’s actually telling that Linux packaging systems had to be circumvented with third-party approaches like snap, flatpak, appimage etc., because for the longest time they couldn’t handle things like having two versions of the same package installed at the same time, resolving old dependencies, system downgrades, or recovery (see the sketch below).

      Linux has had advanced features like overlayfs for 20 years but never used any of them for packages. Yet we have 20 different solutions for init.
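
      For instance, flatpak’s runtime model handles the parallel-versions case directly (the runtime branches here are just examples):

          # Two branches of the same runtime installed side by side,
          # something the classic dpkg/rpm layout could not express.
          flatpak install flathub org.freedesktop.Platform//23.08
          flatpak install flathub org.freedesktop.Platform//24.08
          flatpak list --runtime    # both versions coexist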

      • Manbart@beehaw.org · 11 months ago

        Like everything, it’s a trade-off. Windows allows different versions of the same libraries side by side, but at the cost of an ever-growing WinSxS folder and slow updates.

    • tetha@feddit.de · 11 months ago

      And that skeleton of a system becomes easier to test.

      I don’t need to test ~80-100 different in-house applications on however many different versions of Java, Python, .NET and so on.

      Instead I end up with 12 different classes of systems. My integration tests on a build server can check these thoroughly every night against multiple versions of the OS (see the sketch below). And if the integration tests are green, I can be 95-99% sure things will work right. The dev and testing environments will catch the rest if something wonky is going on with Docker and new kernels.
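
      A minimal sketch of such a nightly matrix, assuming the integration suite lives in ./tests with an entry point run.sh (both names hypothetical):

          # Run the same integration suite against several OS bases;
          # green across the whole matrix ~= safe to promote.
          for base in debian:12 ubuntu:22.04 ubuntu:24.04; do
              docker run --rm -v "$PWD/tests:/tests" "$base" /tests/run.sh
          done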