• Arghblarg · 17 hours ago

    Agreed there – it’s good for onboarding devs and ensuring a consistent build environment.

    Once an app is ‘stable’ within a Docker env, great – but running it outside of a container will inevitably reveal lots of subtle issues that might be worth fixing (assumptions become evident when one’s app encounters a different toolchain version, stdlib, or other libraries/APIs…). In this age of rapid development and deployment, perhaps most shops don’t care about that, since containers let one ignore such things for a long time, if not forever…
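    One way to surface those assumptions instead of waiting for a different host to expose them is to check the documented toolchain/library versions at startup. A minimal sketch, assuming a hypothetical Python app (the expected versions below are illustrative placeholders, not from any real project):

        # Hypothetical startup check: make environment assumptions explicit
        # rather than relying on the container image to pin them silently.
        import sys
        import sqlite3

        EXPECTED_PYTHON = (3, 11)         # toolchain version the app was built against (assumed)
        EXPECTED_SQLITE_MIN = (3, 40, 0)  # stdlib-bundled dependency floor (assumed)

        def environment_warnings() -> list[str]:
            """Return warnings where the running environment differs from the documented one."""
            warnings = []
            if sys.version_info[:2] != EXPECTED_PYTHON:
                warnings.append(
                    f"Python {sys.version_info.major}.{sys.version_info.minor} "
                    f"differs from documented {EXPECTED_PYTHON[0]}.{EXPECTED_PYTHON[1]}"
                )
            running_sqlite = tuple(int(p) for p in sqlite3.sqlite_version.split("."))
            if running_sqlite < EXPECTED_SQLITE_MIN:
                warnings.append(
                    f"SQLite {sqlite3.sqlite_version} is older than documented minimum "
                    + ".".join(str(p) for p in EXPECTED_SQLITE_MIN)
                )
            return warnings

        if __name__ == "__main__":
            for w in environment_warnings():
                print("environment warning:", w)

    Run outside the container, a check like this prints exactly the kind of mismatch that otherwise stays invisible until something breaks.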

    But like I said, I know I’m fighting a losing battle. I just wish containers weren’t used so much as a shortcut to deployment; good documentation of dependencies and configuration, plus testing in varied environments, would be my preference.

    And yes, I run a bare-metal ‘pet’ server, so I deal with configuration that might otherwise be glossed over by containerized apps. Guess I’m just crazy, but I like dealing with app config at one layer (the host OS) rather than having it spread across multiple containers.

    • Clent@lemmy.dbzer0.com · 9 hours ago

      The container should always be updated to match production. In a non-container environment every developer has to do this independently, but with containers it only has to be done once; developers then pull the update, which comes down like a git-style diff (only the changed layers are transferred).

      Best practice is to have the people who update the production servers be responsible for updating the containers, assuming they aren’t deploying the containers directly.

      It’s essentially no different from updating multiple servers, except that one of those servers is then committed to a local container repository.

      This also means there are snapshots of each update, which can be useful in their own right.
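      As a rough illustration of that pull-and-snapshot workflow, here is a minimal sketch assuming the Docker SDK for Python (pip install docker) and a hypothetical internal repository name:

          # Minimal sketch using the Docker SDK for Python; the repository name is hypothetical.
          import docker

          REPO = "registry.example.internal/myapp"  # hypothetical internal repository

          client = docker.from_env()

          # Developers pull the updated image; only the layers that changed since the
          # last pull are downloaded, which is what makes it feel like a git-style diff.
          image = client.images.pull(REPO, tag="latest")
          print("pulled:", image.id)

          # Every previously pulled image stays around locally as a snapshot of that
          # update, so older versions can be inspected or run again if needed.
          for img in client.images.list(name=REPO):
              print(img.tags, img.attrs.get("Created"))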