And it failed spectacularly.

We only needed a simple form, but we wanted to be fancy, so we used “Nextcloud Forms”.

The Docker image automatically updated the install to Nextcloud 30, but the Forms app requires Nextcloud 29 or lower. No warning whatsoever. It’s an official app; couldn’t they have waited until it was ready for NC 30 before launching it? The newsletter boasts that “NC Hub 9 is the best thing since sliced bread”, yet I don’t see any difference in either visuals or performance compared to NC Hub 2.

Conclusion: we made our business rely on Nextcloud Forms as a signup form, and the one thing we were actually using Nextcloud for was disabled who knows how many weeks ago.

  • Scrubbles@poptalk.scrubbles.tech · 20 hours ago

    Oh, Nextcloud docker is a joke. They follow no standards or best practices when it comes to Docker. They keep the entire app directory mounted as a volume, which means it will upgrade you without you “needing” to upgrade the Docker image. They have volumes within volumes they need to mount. Their configs can (and do) override environment variables. Most actions that need to be taken require running an occ command, which can only be done by exec’ing into the container.

    Nextcloud docker is honestly just such a joke. They should have rethought their application from a Docker perspective, and they didn’t. God, just number one: Docker images should never update themselves. It’s a freaking pinned version for a reason. If I want to update, it should be as simple as bumping the version tag, and it does any upgrades in place when I do that.
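
    Just to show what I mean, roughly all it should take is something like this (a sketch, the tag and volume name are only examples):

    ```yaml
    # Sketch of a pinned setup: the app only upgrades when I bump the tag
    # and run `docker compose up -d` again.
    services:
      nextcloud:
        image: nextcloud:30.0.2-apache     # illustrative pinned version
        volumes:
          - nextcloud_html:/var/www/html   # the image still insists on owning this whole dir
    volumes:
      nextcloud_html:

    # and occ still means exec'ing into the container, e.g.:
    # docker compose exec -u www-data nextcloud php occ maintenance:mode --on
    ```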

    I honestly steer people away from Nextcloud now because of how mismanaged their images are.

    • Max-P@lemmy.max-p.me · edited · 17 hours ago

      Yep, and I’d guess there’s probably a huge component of “it must be as easy as possible”, because the primary target is self-hosters who don’t really even want to learn how to set up Docker containers properly.

      The AIO Docker image is an abomination. The other ones are slightly more sane but they still fundamentally mix code and data in the same folder so it’s not trivial to just replace the app.

      In Docker, the auto updater should be completely neutered; it’s the wrong way to update the app.

      The packages in the Arch repo are legit saner than the Docker version.
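
      For contrast, what keeping code and data apart could look like in compose, roughly (whether the official image actually behaves with only the subfolders mounted is its own question, and the names are just examples):

      ```yaml
      # Sketch: app code stays inside the pinned image, only state lives in volumes.
      services:
        nextcloud:
          image: nextcloud:30-apache       # code lives here, replaced on upgrade
          volumes:
            - nextcloud_config:/var/www/html/config
            - nextcloud_data:/var/www/html/data
            - nextcloud_apps:/var/www/html/custom_apps
      volumes:
        nextcloud_config:
        nextcloud_data:
        nextcloud_apps:
      ```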

      • Scrubbles@poptalk.scrubbles.tech · 17 hours ago

        I had to learn how to mount subpaths for their terrible container, and god, just the updater is mind-boggling. And I have to store their code in a volume, because of course I have to, why would code and configuration ever need to be… configurable? I actually tried to put their config.php into a ConfigMap just to see, and of course PHP doesn’t allow that - not that I blame PHP for it - but ffs it’s been years, it’s time to allow config to also come from a YAML file or something.
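
        For anyone curious, the subPath dance looks roughly like this (names made up; the ConfigMap part is exactly what didn’t fly):

        ```yaml
        # Rough sketch of mounting a single file via subPath (hypothetical names).
        apiVersion: v1
        kind: Pod
        metadata:
          name: nextcloud
        spec:
          containers:
            - name: nextcloud
              image: nextcloud:30-apache
              volumeMounts:
                - name: nextcloud-config
                  mountPath: /var/www/html/config/config.php
                  subPath: config.php      # mount just the one file, not the whole dir
          volumes:
            - name: nextcloud-config
              configMap:
                name: nextcloud-config     # mounts read-only, presumably why Nextcloud chokes on it
        ```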

          • Scrubbles@poptalk.scrubbles.tech · 9 hours ago

            Yeah I’ve thought about migrating, but I have a few users who use Nextcloud regularly now, so I’m forced to support it - unless there’s an easy migration path.

        • Max-P@lemmy.max-p.me · 16 hours ago

          Having the web server be able to overwrite its own app code is such a good feature for security. Very safe. Only need a path traversal exploit to backdoor config.php!

      • Scrubbles@poptalk.scrubbles.tech · 18 hours ago

        I do it in docker at home, for myself, in an environment I am okay with accidentally destroying - and even then I have nightly backups of the volumes.

        In a professional system, as mentioned in my other comment, I would simply do it in a VM with the disk also scheduled for nightly backups. Nextcloud just hardcoded too many things on the assumption that the underlying system is mutable. Unfortunately, that’s just the easiest way to handle it.

        However, also as mentioned, if I were in a professional environment, I’d have to really look at the cost of all of that infrastructure and of my time to run it - and decide whether I really thought I could run it myself with all of that overhead, and whether it would still make sense compared to just doing Google Docs or something. Remember, it’d be my ass on the line, as OP is learning.

    • Moonrise2473@feddit.itOP · 19 hours ago

      I wiped a whole drive (luckily it only held a redundant backup) with the Docker image, as the behavior was (or still is, I don’t know if it was fixed) to rm -rf . and replace it with fresh files if occ isn’t found. So in the docker compose I accidentally typed the volume as /mnt/disk2 instead of /mnt/disk3, and it erased it.
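
      Reconstructed for illustration (not the actual file), it was something like this:

      ```yaml
      # One character in the host path decided which disk got treated as the Nextcloud dir.
      services:
        nextcloud:
          image: nextcloud:30-apache
          volumes:
            - /mnt/disk2:/var/www/html    # typo: should have been /mnt/disk3
            # the entrypoint finds no occ there, assumes a fresh install,
            # and replaces the contents of the mount
      ```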

      • Scrubbles@poptalk.scrubbles.tech · 19 hours ago

        Oh yeah, if you’re in a professional environment, I’m sorry but that’s just not great. The only way I’d consider running Nextcloud professionally would be on a VM of its own with nightly disk backups, with blob storage as the backing - and even then, with the cloud costs, how close are you really to just paying for an enterprise license from Google or Microsoft? Plus not having to worry about it yourself.