Knowledge silos and expertise are two sides of the same coin. From “full-stack engineer” to “DevOps practitioner”, our industry loves to pretend everyone can do everything. We’re an industry of hobbyists. We love to tinker. I don’t know if we are fooling ourselves or if the industry has been exploiting our hobby-driven nature, but it’s time for DevOps to get thrown out of an airlock.

  • ChojinDSL@discuss.tchncs.de

    I hear ya.

    I’m an old-school Linux sysadmin. Been at it since 2003. I always groan when some higher-up wants us to host stuff on AWS and Kubernetes, because they have no idea how it works; they just like the buzzwords and the fact that everybody else is doing it this way.

    And I’m just thinking: great. Instead of renting a very affordable beefy server from Hetzner or OVH, we use AWS, which costs us 3-5x as much for less performance. And I try to explain to them that not every application is like Netflix, where you can just throw more nodes at it in order to scale.

    Docker I like, since it can make things easier with regard to ensuring a consistent environment, once you know your way around it. But Kubernetes and AWS I hate with a passion. With AWS I can never truly know in advance how much it’s going to cost, since their pricing structure is so mind-bogglingly complicated and you get billed if so much as a fly farts in one of their server rooms.
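
    To make the “consistent environment” point concrete, here’s a minimal sketch using the Docker SDK for Python. The image tag and command are placeholders; in practice you’d pin a digest:

    ```python
    import docker  # pip install docker

    client = docker.from_env()

    # Running a pinned image means every host executes the exact same
    # userland, regardless of what's installed on the host itself.
    output = client.containers.run(
        "python:3.11-slim",  # placeholder tag; a digest is stricter
        ["python", "-c", "print('same environment everywhere')"],
        remove=True,         # clean up the container afterwards
    )
    print(output.decode())
    ```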

    Another thing: every server provider I’ve ever worked with gives you a way to boot into a rescue mode if something goes wrong, so you can fix it and reboot. Not with AWS. There you need to make a snapshot, fire up another instance, attach that snapshot as a volume and mount it, try to fix things, then shut down that instance, detach the volume, and re-attach it using the same device name as the root device on your original instance. Oh, it doesn’t boot? Rinse, repeat. This shit can take hours. Whereas with a “normal” root server, an IPMI console, and a rescue boot, it’s (usually) a matter of minutes.
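
    For anyone who hasn’t had the pleasure, the dance looks roughly like this with boto3 (the AWS SDK for Python). This is only a sketch: the region, IDs, and device names are hypothetical, and in reality you wait on every state change before the next step:

    ```python
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-central-1")  # hypothetical region

    # 1. Snapshot the broken instance's root volume (IDs are placeholders).
    snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",
                               Description="rescue attempt")
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

    # 2. Create a volume from the snapshot in the helper instance's AZ.
    vol = ec2.create_volume(SnapshotId=snap["SnapshotId"],
                            AvailabilityZone="eu-central-1a")
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])

    # 3. Attach it to a helper instance, mount it there, and try to fix it.
    ec2.attach_volume(VolumeId=vol["VolumeId"],
                      InstanceId="i-0fedcba9876543210",  # helper instance
                      Device="/dev/xvdf")

    # 4. Afterwards: detach, re-attach as the root device of the original
    #    instance (e.g. /dev/xvda), boot, and hope. If not: rinse, repeat.
    ```

    Versus clicking “rescue system” in a Hetzner or OVH panel and rebooting.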

    Same with backups. I always make sure to set up a backup system that lets me restore individual files. Yes, you can have automated snapshots of your EC2 instance. But guess what: if something goes wrong, you restore the entire snapshot. So even if you just need to recover a single file, you still have to go through the process of reverting the whole system to a previous snapshot.

    Bah, I could go on and on, but honestly the whole thing just seems to create more and more abstraction layers that make it difficult to figure things out when something doesn’t work.

    The only things I can somewhat appreciate about AWS are how easily you can migrate to a more powerful instance if you need it, and how quickly you can launch a new instance. But that’s about it.

    In previous companies, we would have a couple of beefy servers, each running multiple apps, with perhaps a failover server at the ready. Our server budget was maybe €1-2k per month.

    With AWS it’s like 5x that.

    • lightrushOP

      You just need Terraform. 🤭

      On a serious note, Docker is brilliant for many reasons, some of which you mentioned. I think K8s is great too. With that said, running K8s just because is mind-bogglingly stupid. I’ve seen this done in corpos I’ve worked at, exactly as you describe. Besides, one doesn’t have to run K8s on AWS, or any other public cloud, or anywhere it doesn’t make sense. You make pretty good points about AWS. If it’s better in some way to run on metal, you run on metal. And if it makes sense for an application to be deployed on K8s, that can also run on top of the metal you have.

      But here’s another, and I think potentially more significant, point against AWS and the likes: they introduce lock-in. Every public cloud has its own services with their own APIs and features. The moment you buy into those, e.g. CloudWatch automation, you can no longer move the workload to a place that doesn’t have CloudWatch. And no place other than AWS has it. You’re locked in. 👏 Developers (me) find 10 easy solutions in some marketing wank from AWS, all of them proprietary, stitch them together to implement some service and bam, that service belongs to AWS forever. You pay per fly fart from here on out, until you rewrite it.
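
      To make the lock-in concrete, here’s a hypothetical sketch with boto3: the namespace and metric are made up, but the shape is typical. Perfectly reasonable-looking application code that, once written, only runs against AWS:

      ```python
      import boto3

      cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

      # PutMetricData is a CloudWatch-specific API. There is no drop-in
      # equivalent to point this call at once you leave AWS.
      cloudwatch.put_metric_data(
          Namespace="MyApp",                    # hypothetical namespace
          MetricData=[{
              "MetricName": "OrdersProcessed",  # made-up metric
              "Value": 42.0,
              "Unit": "Count",
          }],
      )
      ```

      Multiply that by every queue, alarm, and trigger a service touches and the rewrite cost explains itself.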

      In fairness to Terraform, it does indeed make life easier if you have to juggle workloads across public clouds.

  • TCB13@lemmy.world

    Maybe if people spent a bit of time making reproducible environments and bundling stuff properly, we wouldn’t be in this mess. DevOps is one of the side effects of the Docker virus and the “cool” architectures that are so complex that nobody can re-install/re-configure them in a reasonable amount of time.

    • lightrushOP

      Oh, DevOps is older than Docker for sure. We were doing DevOps on an OpenStack private cloud before Docker was on the radar. DevOps has nothing to do with tech and everything to do with non-technical people imagining theoretical benefits if “silos were broken”, etc.

      • TCB13@lemmy.world

        Yes, you’re completely right. However, for the general public / most companies, DevOps only became a thing when they moved to Docker and realized things were so out of hand at that point that they had to hire specific people to handle the issue… then proceeded to call them DevOps (among other things). And yes, this completely subverts the DevOps philosophy, as it got kind of “productized”.

  • fourstepper@lemmy.ml

    honestly this just seems like venting based on your own experience of a terrible implementation that just so happened to be called “devops” in your company