Hi, so I have a very idiosyncratic homelab. It’s a collection of gear and services accumulated over nearly 30 years of doing weird stuff.

For the past 9 years it’s been running as a bunch of LXC containers (privileged, because unprivileged containers didn’t exist back then), but several of those containers are P2V conversions of physical hosts dating back to Debian woody and earlier. They’re all upgraded to at least buster, and most are on bookworm. Stuff like Asterisk, email, Home Assistant, Nextcloud, and Matrix Synapse runs there these days.

The server is a 15-year-old HP Gen6 thing, and it’s getting quite long in the tooth. There’s also a dedicated cheap microserver with an i4 running OPNsense on bare metal as a firewall.

Trying to run things like local voice processing for Home Assistant is showing the HP’s age quite badly. Also, our area is getting fibre, and the OPNsense box is maxed out at gigabit; more speed would be nice.

So, I’m in two minds. The homelab has been a lot of fun over the years, but I’m over 50 now and I want lower maintenance. This latest wave of upgrades is making me rethink the next 20 years of homelab. I don’t want to leave behind something stupidly “only me” if I were to die tomorrow (diabetes is a fickle bastard). My wife might want to try to carry on with this thing, since it runs some useful stuff around the house (though it should be noted that nothing in this house requires a server or cloud), and that’s not going to happen with the current setup.

I think I might have a path, using Proxmox, from where I am now to something that can be deployed on, e.g., a bunch of MS01-class devices. I’m thinking of converting the existing HP server to Proxmox, which would let me redeploy all my existing LXC containers into the Proxmox world. As I acquire hardware over the next year, I can look at a k8s migration of the services onto a small, MUCH lower-power cluster. One of the keys is that I don’t want big outages of services for days or weeks while I migrate everything, so it’s gotta be a rolling upgrade, as it were.

I’m here soliciting feedback. Has anyone ever migrated from a deeply legacy homebrew homelab into something like this? Does it reduce the workload long term? What’s the practicality of this for someone rather less tech savvy?

Thanks!

    • cpwOP

      Not seen that one before, but I’m familiar with the concept. I’m working on something for myself that’ll go into our will prep when we finally get around to it.

  • BearOfaTime@lemm.ee

    I’m currently migrating all sorts of stuff to Proxmox.

Nice thing is, VMs and containers are easily copied with the systems off. I even did a P2V of an ancient Win7 machine and am reusing that hardware for Proxmox; I’ll run the VM in Proxmox until I get everything cleaned up and restructured.
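
    For reference, the import side of a P2V like that can be done with Proxmox’s qm tool. This is only a rough sketch; the VM ID, image path, and storage name are placeholders for your own setup:

    ```shell
    # Create an empty VM shell, then attach the raw disk image captured from
    # the physical machine. "120", the image path, and "local-lvm" are placeholders.
    # Win7 has no virtio drivers out of the box, so stick to emulated SATA/e1000.
    qm create 120 --name win7-legacy --memory 4096 --ostype win7 \
        --net0 e1000,bridge=vmbr0
    qm importdisk 120 /root/win7.raw local-lvm      # lands as an "unused" disk
    qm set 120 --sata0 local-lvm:vm-120-disk-0 --boot order=sata0
    ```

    Once it boots, you can install the virtio drivers inside the guest and switch the disk/NIC over for better performance.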

    Proxmox is a beast.

  • orb360

    I migrated from a mix of proxmox, hyper-V, bare metal, and Synology hosted docker onto a full k8s cluster.

It is much easier to manage now, including adding or replacing nodes; I even rebuilt the cluster from 7 RPis onto 7 EliteDesk mini PCs (from ARM to x86, and from Debian to Talos).

    But it wasn’t a small process either.

You’ll have to deploy your k8s cluster, learn how to host the services you want (using a load balancer, DNS setup, cluster IPs, etc.), and set up a storage provider (I use NFS to my Synology share; not the fastest or most secure, but easiest).
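
    For the NFS route, the static PV/PVC pair looks something like this (the server address, export path, sizes, and names here are just placeholders for whatever you’re hosting):

    ```yaml
    # Hypothetical NFS-backed volume for a single service.
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: nextcloud-data
    spec:
      capacity:
        storage: 50Gi
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      nfs:
        server: 192.168.1.10          # NAS address (placeholder)
        path: /volume1/k8s/nextcloud  # export path (placeholder)
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: nextcloud-data
      namespace: nextcloud
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 50Gi
      storageClassName: ""            # bind statically to the PV above
      volumeName: nextcloud-data
    ```

    The empty storageClassName is what keeps a dynamic provisioner from grabbing the claim.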

And then you’ll need to migrate your services off the old hardware onto the cluster one by one… which means learning Docker and k8s and how they work together.

There are some things that I cannot host on the cluster, like zwave2mqtt, which requires a centralized physical location in my house and access to a USB Z-Wave adapter. So even then, not quite 100% ended up on the cluster; that one runs in Docker on an RPi. (Technically you can do this if you pin the container to a single host and pass through the USB device, but I didn’t see a reason for it.)

But service upgrades and adding new services, now that I’m used to it, are very easy… Expanding compute is also pretty easy. So maintenance has gone down a bunch, but it was a decent amount of work and learning to get there.

K8s is relatively specialized knowledge compared to what the generally computer-literate population knows about how computers work… So in terms of someone being able to take over your work: if they already know k8s, it would be reasonably easy. If they don’t, but are savvy enough to learn, it would take a bit but not be too bad. If someone doesn’t already know their way around Linux and a terminal, though, it would probably not be possible for them to pick it up in a reasonable amount of time.

    • shankrabbit@lemmy.world

Any tips you can give for someone who is running k8s on RPi 4s and wants to switch architectures? Sounds like you did something similar, and while my RPis are holding strong, I want something with a little more power, like a few N100-based micro PCs.

      • orb360

        All the images I used already had x86 variants available. In fact, I was building and pushing my own arm variants for a few images to my own Nexus repository which I’ve stopped since they aren’t necessary anymore.

        If you are using arm only images, you’ll need to build your own x86 variants and host them.

I created a brand-new cluster from scratch and then set up the same storage PVs/PVCs and namespaces.

Then I’d delete the workloads from the old cluster, apply the same YAML to the new cluster, and then update my DNS.

        I used kubectx to swap between them.

Once I verified the new service was working, I’d move on to the next. Since the network storage was the same, it was pretty seamless. If you’re using something like Rook to turn your nodes’ disks into network storage, that would be much more difficult.
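
        The per-service cutover might look roughly like this (the context names and manifest directory are placeholders for your own setup):

        ```shell
        # Point kubectl at the old cluster and remove the workload there.
        kubectx old-cluster
        kubectl delete -f nextcloud/

        # Switch contexts and apply the exact same manifests to the new cluster.
        kubectx new-cluster
        kubectl apply -f nextcloud/

        # Wait until the deployment is healthy, then repoint DNS at the new cluster.
        kubectl -n nextcloud rollout status deployment/nextcloud
        ```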

After everything was moved, I powered down the old cluster and waited a few weeks before I wiped the nodes, in case I needed to power it up and reapply a service to it temporarily.

My old cluster was k8s on Raspbian, but my new one is all Talos. I also moved from a single control-plane node to a three-machine control plane (which is completely unnecessary, but I just wanted to try it). That had no effect on any services, though.

    • BearOfaTime@lemm.ee

      How would you compare Proxmox to Kubernetes?

I’m currently running a hypervisor lab to test stuff for friends in the SMB IT space, to find a replacement for VMware. At the moment, Proxmox has the best balance of cost, flexibility, and ease of learning, but if Kubernetes is more mature and has better support, that would be a great argument for it.

      • __init__@programming.dev

Proxmox is going to be a lot easier to pick up if you’re coming from VMware. Kubernetes is a beast with a considerable learning curve, so if you’re not already familiar with it, I wouldn’t recommend it for a lab environment (unless the goal is specifically to learn it).

        • BearOfaTime@lemm.ee

Yea, the lab is to test for a VMware replacement, so I’ll start tinkering with Kubernetes along with Proxmox and a couple of others.

    • __init__@programming.dev

I’ve migrated most of my lab from a mess of Proxmox LXCs over to k3s (I use k8s at work), except for Home Assistant. I’ve been back and forth on that one; I really like being able to back up the entire VM before running updates or whatever. Could you use a node selector to force Z-Wave or Zigbee or whatever to run on the node that has the USB device? Or is it still a pain in the ass that way, ’cause you have to know the path on the specific host… I haven’t tried that yet.

      • orb360

You can pin the pod to a specific node and pass through the USB device path, and that will work. But the whole point of k8s is redundancy and workloads being able to run anywhere.
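
        A rough sketch of that pinning setup (the node name, image, and device path are placeholders; I’m using zwave-js-ui here purely as an example):

        ```yaml
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: zwave-js-ui
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: zwave-js-ui
          template:
            metadata:
              labels:
                app: zwave-js-ui
            spec:
              nodeSelector:
                kubernetes.io/hostname: node-with-stick   # pin to the node with the adapter
              containers:
                - name: zwave-js-ui
                  image: zwavejs/zwave-js-ui:latest
                  securityContext:
                    privileged: true                      # needed for raw device access
                  volumeMounts:
                    - name: zwave-usb
                      mountPath: /dev/ttyUSB0
              volumes:
                - name: zwave-usb
                  hostPath:
                    path: /dev/ttyUSB0                    # device path on that specific host
                    type: CharDevice
        ```

        The hostPath is exactly the “you have to know the path on the specific host” problem: if the stick ever shows up as a different device node, the pod breaks.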

Plus, for IoT networks like Zigbee and Z-Wave, controller position in your house is important. If your server is more centrally located, that may not be a concern for you.

I’ve heard of some people using a USB-serial-over-Ethernet device to relocate their controller remotely, but I haven’t looked into that. Running the controller on a one-off RPi just made more sense for me.

        • __init__@programming.dev

Makes sense, thanks. Yeah, idk about USB serial over Ethernet; it’s an interesting idea, but I wouldn’t want to introduce more moving parts (and/or latency) to the network.

    • cpwOP

I’ve seen a few people who run Proxmox on the bare metal with k8s running inside VMs, or containers, inside Proxmox. I’m not sure if I should just go full bare-metal k8s or have the Proxmox (or other?) intermediate layer…

      • orb360

You can run it on Proxmox if you want to mix non-k8s machines onto the same hardware. All my k8s nodes are dedicated to running k8s only, though, so there was no reason for me to have that extra layer.

I would not run k8s on Proxmox just so you can run multiple nodes on the same machine, though; the only reason I could really see to do that is if you only had one machine and you really wanted to keep your control-plane and worker nodes separate.

  • poVoq@slrpnk.net

Honestly, if you want to simplify things and reduce the maintenance burden, just use Debian stable.