I’ve been writing code professionally for 24 years, 15 of them in Python and 9 of those with Docker. I got tired of running into the same complications every time I started a new job, so I wrote this. Maybe you’ll find it useful, or it could even start a conversation; either way, this post has been a long time coming.

Update: I had a few requests for a demo repo as a companion to this post, so I wrote one today. It includes a very small Django demo using Docker, Compose, and GitLab CI.

  • Daniel Quinn (OP)

    It sounds like you’re confusing the application with the data. Nothing in this model requires the use of production data.

    • Martín@lemmy.world

      Just trying not to confuse realistic testing with self-deception :) I’m not convinced that testing with synthetic data can pass for a production environment.

      • Daniel Quinn (OP)

        But there’s nothing stopping you from loading realistic (or even real) data into a system like this. They’re entirely different concepts. Indeed, I’ve loaded gigabytes of production data into systems similar to what I’m proposing here (taking all necessary precautions, of course). At one company, I even built a system that pulled production data into a developer-friendly snapshot while simultaneously pseudo-anonymising it so it could be safely (for some value of ${safe}) tinkered with in development.
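
        To make that concrete, here’s a minimal sketch of what the pseudo-anonymisation step might look like. The table and column names (auth_user, first_name, email) and the connection string are assumptions for illustration; a real tool would be tailored to the project’s schema and would only ever be pointed at the local copy of the data.

        ```python
        # Hypothetical sketch: scrub personally identifiable columns from a
        # snapshot that has already been loaded into the local database.
        # Requires psycopg2 and Faker; all names here are illustrative.
        import psycopg2
        from faker import Faker

        fake = Faker()

        def pseudo_anonymise(dsn: str) -> None:
            """Replace PII in the local snapshot with plausible fake values."""
            with psycopg2.connect(dsn) as conn:
                with conn.cursor() as cur:
                    cur.execute("SELECT id FROM auth_user")
                    for (user_id,) in cur.fetchall():
                        cur.execute(
                            "UPDATE auth_user"
                            " SET first_name = %s, last_name = %s, email = %s"
                            " WHERE id = %s",
                            (fake.first_name(), fake.last_name(), fake.unique.email(), user_id),
                        )

        if __name__ == "__main__":
            # Point this at the *local* copy only, never at production itself.
            pseudo_anonymise("postgresql://django:django@localhost:5432/django")
        ```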

        In fact, adhering to a system like this makes such things easier, since you don’t have to make any concessions to “this is how we do it in development”. You just pull a snapshot from the environment you want to work with and load it into your Compose session.
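
        For the “pull a snapshot and load it into your Compose session” part, the idea is simply to stream a database dump into the database container of your running Compose project. A rough sketch, assuming a Postgres service named db and default credentials (all of which are guesses for illustration):

        ```python
        # Hypothetical sketch: load a SQL snapshot into the "db" service of a
        # running Compose session.  Service name, credentials, and paths are
        # assumptions; adjust to match your docker-compose.yml.
        import subprocess

        def load_snapshot(dump_path: str, service: str = "db") -> None:
            """Stream a SQL dump into the Postgres container via docker compose exec."""
            with open(dump_path, "rb") as dump:
                subprocess.run(
                    ["docker", "compose", "exec", "-T", service,
                     "psql", "--username", "django", "django"],
                    stdin=dump,
                    check=True,
                )

        if __name__ == "__main__":
            load_snapshot("./snapshots/latest.sql")
        ```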