My current setup is an Odroid H3+ with 2x 8 TB hard drives configured in RAID 1 via mdadm. It's running Ubuntu Server and I put in 16 GB of RAM. It also has a 1 TB SSD for system storage.

I’m looking to expand my storage with an external drive enclosure that connects over USB 3. My ideal enclosure has 4 bays with hardware RAID support, which I plan to configure in RAID 10.

My main question: how do I pool these together so I don’t need to balance between the two? Ideally everything bound together to appear as a single mount point. I looked at mergerfs, but I’m not sure it would work here.

I ask primarily because I write scripts that move data to the current RAID 1 setup. As I expand the storage, I want them to keep working without having to check free space across the two sets.

Be real with me - is this dumb? What would you suggest given my current stack?

  • lemann@lemmy.one · 10 months ago

    I wouldn’t mix software and hardware RAID; if you can, try to stick with just one or the other. By mixing them you’ll open yourself up to a whole can of worms if there’s a disk failure, IMO.

    If this were Btrfs (or, to a lesser extent, ZFS), the solution would be to put the external USB enclosure into JBOD mode and just add the disks to the array; the storage would grow automatically with each newly added disk, and you wouldn’t need to change your script…

    Since you’re using mdadm, I believe you’ll need to add two new disks, but I’m not sure you can expand the space on the existing mount point without a filesystem-merging tool.
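
    For what it’s worth, mdadm can grow a RAID 1 array in place if you swap both members for larger disks one at a time — a rough sketch, with hypothetical device names:

    ```shell
    # Hypothetical layout: /dev/md0 is the existing RAID 1; /dev/sdc1 is a new, larger disk
    mdadm --manage /dev/md0 --add /dev/sdc1      # add the new disk as a spare
    mdadm --manage /dev/md0 --fail /dev/sda1     # fail out an old member; the spare rebuilds
    mdadm --manage /dev/md0 --remove /dev/sda1
    # ...wait for resync (watch /proc/mdstat), then repeat for the second old disk...
    mdadm --grow /dev/md0 --size=max             # let the array use the full new capacity
    resize2fs /dev/md0                           # grow the filesystem (ext4; xfs_growfs for XFS)
    ```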

    Edit: accidentally submitted early

    • numbness3416 (OP) · 10 months ago

      Thanks for the ideas - at first I was thinking I could simplify things by having the disk enclosure mount as a single drive and then merging it into my existing software setup. I’m getting pretty unanimous feedback that this is not a good idea.

      I’ll dive deeper into mdadm’s documentation and see if I can do some magic here. I realize it’s not the most elegant solution, but I’d really prefer keeping my existing setup and adding to it.

      Thanks again for your input, I appreciate it!

  • NaibofTabr@infosec.pub · 10 months ago

    You want to pool 2 drives managed by software RAID with 4 other drives in an external enclosure managed by hardware RAID, connected via USB?

    Yes, this is a terrible idea, as you already suspect. If you even get it to work, it will be unstable and the USB bus will be a nasty bottleneck for data transfer.

    For less than the price of a good 4-bay NAS you could get a used Dell PowerEdge server. If that is outside your price range, consider one of the PowerEdge towers.

    This would be a significantly more stable and reliable setup, and offers room for future expansion. In addition, if data hoarding is the goal, you would be able to use an OS like TrueNAS which would give you better control over your drive pool(s).

    • numbness3416 (OP) · 10 months ago

      Hey - thanks for the input and suggestions. Part of my idea came from wanting to avoid rebuilding my whole stack, but I see that I’ll likely need to. Part of the fun in this hobby, I suppose 🙂

      Really appreciate it and thanks again.

      • constantokra@lemmy.one · 10 months ago

        You don’t need to. I like mergerfs a lot, especially paired with snapraid. It all depends on what you plan on using the storage for. Does it need to be blazing fast?

        I definitely wouldn’t mix hardware and software RAID, though. You can always load your data onto a new mergerfs pool on the new drives, if there’s no other way, then add your old drives to the pool. I imagine it’s not necessary, but I’ve only ever started with an empty mergerfs pool and added data to it, so I can’t tell you exactly how to do it.
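
        If it helps, a mergerfs pool is basically just one fstab line over the individual disk mounts — the paths and options below are illustrative, not a recommendation:

        ```shell
        # /etc/fstab - pool every branch matching /mnt/disk* at /mnt/pool (hypothetical paths)
        /mnt/disk* /mnt/pool fuse.mergerfs defaults,allow_other,category.create=mfs,moveonenospc=true 0 0
        ```

        Your scripts then just write to /mnt/pool; with `category.create=mfs` mergerfs picks the branch with the most free space, so there’s no manual balancing.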

        • NaibofTabr@infosec.pub · 10 months ago

          I see this:

          Supports addition and removal of devices with no rebuild times

          But also this:

          MergerFS has zero fault tolerance - if the drive that data is stored on fails, that data is gone.

          ref

          So… what happens when the USB cable gets bumped mid-write, and the drives in the external enclosure suddenly go offline? Because the cable will get bumped at some point, probably at the worst possible time.

          Genuine question, I don’t have experience with MergerFS. OP’s planned setup seems fault-prone to me, rather than fault-resistant.

            • constantokra@lemmy.one · 10 months ago

            I’ve not had it happen, but I imagine it’d be the same as if a SATA drive failed. There’s no fault tolerance, as you pointed out. My understanding is that each drive has the same directory structure, and the pool shows all the files from all drives. If a drive goes offline, those files just disappear.

            I use snapraid to add fault tolerance. Very basically, it takes a snapshot of your files, and you can recover back to that snapshot if one drive fails. You might think a failed drive would look to it like a mass deletion, but I believe the default behavior is to throw an error and notify you if more than a certain threshold of files has been deleted. That might not be built into snapraid itself - it might be part of snapraid-runner, which I’d recommend you use anyway to make it easier to manage.
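
            For context, a minimal snapraid config for a setup like that might look something like this (paths and disk names are made up; one parity disk protects against one data-disk failure):

            ```shell
            # /etc/snapraid.conf (hypothetical layout)
            parity /mnt/parity1/snapraid.parity
            content /var/snapraid/snapraid.content
            content /mnt/disk1/snapraid.content
            data d1 /mnt/disk1
            data d2 /mnt/disk2
            exclude *.tmp
            ```

            Then you run `snapraid sync` on a schedule; snapraid-runner wraps that and can abort if more than a configured number of files were deleted since the last sync.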

            So basically, you’d notice your files disappeared, or a cron job would notice, or snapraid would notice, then you’d go plug the drives back in.

            I get the concern, but if you’re that worried about reliability then you should probably use a commercial product that doesn’t require much know-how or intervention.

            I’m loving the flexibility of mergerfs, snapraid, and a DIY NAS. When I run out of physical space I’ll likely just add a few drives in a USB enclosure myself, so I definitely wouldn’t try to persuade you not to.