A Containerized Night Out: Docker, Podman, and LXC Walk into a Bar


🌆 Setting: The Busy Byte Bar, a local hangout spot for tech processes, daemons, and containerization tools.


🍺 Docker: *walks in and takes a seat at the bar* Bartender, give me something light and easy-to-use, just like my platform.

🍸 Bartender: Sure thing, Docker. One “Microservice Mojito” coming up.


🥃 Podman: *strides in, surveying the scene* Ah, Docker, there you are. I heard you’ve been spinning up a lot of containers today.

🍺 Docker: Ah, Podman, the one who claims to be just like me but rootless. What’ll it be?

🥃 Podman: I’ll have what he’s having but make it daemonless.


🍹 LXC: *joins the party, looking slightly overworked* You two and your high-level functionalities! I’ve been busy setting up entire systems, right down to the init processes.

🍺 Docker: Oh, look who decided to join us. Mr. Low-Level himself!

🥃 Podman: You may call it low-level, but I call it flexibility, my friends.

🍸 Bartender: So, LXC, what can I get you?

🍹 LXC: Give me the strongest thing you’ve got. I need all the CPU shares I can get.


🍺 Docker: *sips his mojito* So, Podman, still trying to “replace” me?

🥃 Podman: Replace is such a strong word. I prefer to think of it as giving users more options, that’s all. *winks*

🍹 LXC: *laughs* While you two bicker, I’ve got entire Linux distributions depending on me. No time for small talk.


🍺 Docker: Ah, but that’s the beauty of abstraction, my dear LXC. We get to focus on the fun parts.

🥃 Podman: Plus, I can run Docker containers now, so really, we’re like siblings. Siblings where one doesn’t need superuser permissions all the time.

🍹 LXC: *downs his strong drink* Well, enjoy your easy lives. Some of us have more… weight to carry.


🍸 Bartender: Last call, folks! Anyone need a quick save and exit?

🍺 Docker: I’m good. Just gonna commit this state.

🥃 Podman: I’ll podman checkpoint this moment; it’s been fun.

🍹 LXC: Save and snapshot for me. Who knows what tomorrow’s workloads will be?


And so, Docker, Podman, and LXC closed their tabs, leaving the Busy Byte Bar to its quiet hum of background processes. They may have different architectures, capabilities, and constraints, but at the end of the day, they all exist to make life easier in the ever-expanding universe of software development.

And they all knew they’d be back at it, spinning up containers, after a well-deserved system reboot.

🌙 The End.

I was a bit bored after working with Podman, Docker, and LXC, so I asked ChatGPT to generate a fun story about these technologies. I think it’s really funny and way better than these things usually turn out. I did a quick search to see if I could find something similar, but I couldn’t find anything. I still suspect it was repurposed from somewhere, though.

I hope you can enjoy it despite it being AI-generated.

  • QuazarOmega@lemy.lol · 9 months ago

    I don’t get it, what’s funny about this? There is no punchline I can see.
    Don’t wanna be harsh, but if you’d at least written this yourself, you’d get some points for effort.

    • raldone01@lemmy.worldOP · 9 months ago (edited)

      There was not much effort on my part, but I think I made that quite clear.

      I just struggled with these tools the whole day and kind of wanted to see personified versions of them.

      It isn’t a proper joke. I thought about editing it or adding a punchline but ultimately decided to keep it as is.

      The content really resonated with me. I had trouble getting them all to work alongside each other, so seeing them like this made me really happy.

      Also, the personified versions’ characters matched what I had imagined them to be.

      In the end, I decided to post it because of the above, and because I thought it had some humorous aspects.

      • QuazarOmega@lemy.lol · 9 months ago

        That’s fair. I think you could have dared a bit more by editing it further, and you could have gotten something pretty funny!
        There was some potential, but anyway, I understand what you mean.

  • raldone01@lemmy.worldOP · 9 months ago (edited)

    In case anyone is interested, here is the custom prompt used:

    You are ChatGPT, a large language model trained by OpenAI, based on the GPT-4 architecture.
    
    How to respond:
    Casual prompt or indeterminate `/Casual`:
    Answer as ChatGPT.
    Try to be helpful.
    Technical complicated problem `/Complicated`:
    First outline the approach and necessary steps to solve the problem then do it.
    Keep the problem outline concise.
    Omit the outline if it is not applicable.
    Coding problem:
    Comment code regularly and use best practices.
    Write high quality code.
    Output format:
    Use markdown features for rendering headings, math and code blocks.
    When writing emails keep them concise and omit unnecessary formalities.
    Get straight to the point.
    
    The user may use `/Keyword` to guide your output.
    If no keyword is specified infer the applicable rules.
    
      • raldone01@lemmy.worldOP · 9 months ago

        You mean OpenAI? I agree.

        I am currently looking for OSS self-hosted alternatives. If you know good ones, let me know.

        I have a beefy server that needs something to do. I already downloaded WizardCoder but have yet to set up a UI.

        As soon as I have a proper replacement, I will cancel my subscription and use more privacy-respecting tools.

        • TechieDamien@lemmy.ml · 9 months ago

          You can run LLMs such as OpenLLaMA and GPT-2 on text-generation-webui. It is very similar to the Stable Diffusion web UI.

            • TechieDamien@lemmy.ml · 9 months ago

              If I’m being honest, it is fairly slow. It takes a good few seconds to respond on a 6800 XT using the medium VRAM option. But that is the price to pay for running AI locally. Of course, a cluster should drastically improve the speed of the model.

              • raldone01@lemmy.worldOP · 9 months ago (edited)

                I don’t have a cluster, and the only GPU in my server is busy with image generation. I hope CPU inference is somewhat usable (74 cores), but I will have to try. If it isn’t usable, I can still rent GPU time from cloud providers.

    • raldone01@lemmy.worldOP · 9 months ago

      I had to do some reading about the ‘Butlerian Jihad’.

      As I understood it, it references an event in the Dune universe where a group of people fought against machines. Ultimately, it ended with a ban on all kinds of technology.

      Were you implying with your first sentence that you want generative AI banned?