I see all the drama around Red Hat and I still don’t get why companies would use RHEL (or CentOS, when it existed). I’ve worked at many companies, and CentOS being years behind was awful for any recent application (GPU acceleration; even new CPUs had problems with the old Linux kernels shipped in CentOS).

Long story short: the only time one of the companies I worked at considered CentOS, it was ditched due to many problems and for not even being dev/researcher friendly.

I hear a lot of YouTube influencers “talking” about (or just reading out Red Hat’s statements on) all the work Red Hat is doing, but I don’t see any of it. I know I dislike GNOME, so I don’t care that they contribute to that.

What I see, though, is a philosophy against FOSS. They even pulled a Microsoft move with CentOS (embrace, extend, extinguish). I see a corporation that doesn’t like sharing and collaborating, but aims at feeding off technology built as a collective. I’m convinced they would love to patent scientific discoveries too. I’m pretty sure there is a deep philosophical gap between people wanting “business-grade” Linux and the FOSS community.

If you have concrete examples of Red Hat added value that cannot be provided by independent experts or the FOSS community, I’d really like to hear them.

  • corsicanguppy · 1 year ago

    “CentOS being years behind was awful”

    You’re not doing it right. There’s absolutely a reason enterprise Linux works with a version that’s more or less locked in place (except for security updates, like a maintenance fork). You need to understand the value you’ve been overlooking.

    • Ten years of a stable platform. Because, yeah, it’s not this week’s release with the glitter, but it’s also not a moving target of breakage you have to constantly chase, so you can actually do dev.
    • Dependencies are figured out.
    • Updates are trivial when security fixes come out. Honestly, yum + cron is so stable and reliable, and compatibility is part of the guarantee, so you and your customers don’t have to worry about or (please god no) delay updates to gauge risk. Since applying updates is 99% of the work of avoiding exploits, this goes from ‘huge risk’ to ‘no-brainer’. And you don’t have to worry that your dev environment is non-trivial to replicate on the daily.
    • Your software’s requirements become ‘EL7’, maybe ‘EL7 + EPEL7’ or so. Your installation process becomes ‘add the repo RPM (which pulls in any other repo RPMs), then yum install’, and you’re already on to mere config work.
    • Validating the install isn’t ‘did you install this ream of apps from this particular week in time, then run some wget|sh bullshit? Now run this other set of commands to confirm your installation’. In our case it’s just an ‘rpm -qa | egrep’, or even an snmpwalk.
    • Not working? Give me your one config file and your ‘rpm -qa’ output and we’ll replicate it here trivially and find out why. (I didn’t work in support, but I liaised with them a lot in the security work, and that was common practice.) Tossing four lines and a <<EOF construct into a Vagrant config is just so easy now, and gives you the entire machine to play with.
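    A minimal sketch of that whole workflow on an EL7 box. The epel-release and yum-cron packages and the centos/7 Vagrant box are real; ‘someapp’ is a hypothetical package name standing in for your software:

```shell
# Install: enable EPEL, then pull the app in from packaged RPMs only.
# ("someapp" is a hypothetical package name, not a real one.)
sudo yum install -y epel-release
sudo yum install -y someapp

# Validate: the RPM database *is* the manifest, so one pipeline
# confirms what's installed instead of a ream of manual checks.
rpm -qa | egrep 'someapp|epel-release'

# Hands-off security updates: yum-cron applies them from cron,
# relying on EL's compatibility guarantee within a release.
sudo yum install -y yum-cron
sudo systemctl enable --now yum-cron

# Replicate a customer machine for debugging: a few lines and a
# heredoc into a Vagrant config give you the entire box to play with.
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.provision "shell", inline: "yum install -y someapp"
end
EOF
vagrant up
```

    None of this is exotic; the point is that every step is one or two distro-native commands rather than a bespoke install script.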

    As someone who used to develop a notable app, cross-distro problems alone made so many of the fringe OSes impossible to support, so we didn’t. EL was the backbone because we respected what we had.

    I just can’t figure out why this week’s glitter is more important than shedding the install/support/update/validate burden by choosing a stable platform to work within. Life’s too short to support dependency hell, or to struggle just to replicate a failing setup in your lab for testing. Do you just not support customers?