• 1 Post
  • 37 Comments
Joined 9 months ago
Cake day: October 16th, 2023

  • “Shared network folder” in Jellyfin doesn’t do what you think it does. 😛 I agree it’s rather confusing. It’s just a convenience link: if the files also happen to be exposed as a Windows share, the Jellyfin app can open that share so you can browse the files. It’s NOT where Jellyfin reads the files from.

    Jellyfin can only index files accessible to it locally. Share the files from TrueNAS to the machine or container running Jellyfin, then point Jellyfin to the directory where you mounted the share. I recommend NFS rather than Samba for this purpose.
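
    A minimal sketch of that setup on the Jellyfin host, assuming the TrueNAS box is at 192.168.1.10 and exports /mnt/tank/media (the IP, export path and mount point are placeholders; create the NFS export in the TrueNAS Shares screen first):

        # Mount the NFS export
        sudo mkdir -p /mnt/media
        sudo mount -t nfs 192.168.1.10:/mnt/tank/media /mnt/media

        # Or make it permanent via /etc/fstab:
        # 192.168.1.10:/mnt/tank/media  /mnt/media  nfs  defaults,_netdev  0  0

    Then add /mnt/media (or whatever path you map it to inside the container) as a library folder in the Jellyfin dashboard.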


  • What does “mediaserver” mean to you? Synology boxes are good for storage but not so great at more CPU-intensive stuff, plus of course they’re not freely upgradeable and you’re tied to their OS.

    If you’re comfortable building your own PC you can install Unraid or TrueNAS, which will give you an easy-to-use admin interface and the ability to use and upgrade off-the-shelf components. /r/buildapc can probably help with that.

    If you’re also comfortable with Linux you can design your own fine-grained approach to the OS and the apps on it, /r/selfhosted can probably help with that.

    SSDs are getting there in $/TB but still have a way to go to catch up to HDDs.

    Your approach of having multiple backup drives is sound. Having everything in one place means putting all your eggs in one basket. Keep that in mind when you reorganize your data.


  • Same, except I also use Scrutiny to flag drives for my attention. It makes an educated pass/fail guess by interpreting vendor-specific SMART values and matching them against the failure thresholds from the Backblaze drive survey. It can tell you things like “the current value of the Command Timeout attribute for this drive falls into the 1-10% failure-probability bracket according to Backblaze”.

    It helps me to plan ahead. If, for example, I have 3 drives that Scrutiny says “smell funny”, it would be nice to have 2-3 spares on hand rather than just 1. Or if two of those drives happen to sit together in the same 2-way mirror, perhaps I can swap one of them somewhere else.
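
    For reference, a rough sketch of how the all-in-one container can be run (the image tag, flags and paths are from memory, so check the Scrutiny README for the current ones; list every drive you want monitored as a --device):

        docker run -d --name scrutiny \
          -p 8080:8080 \
          --cap-add SYS_RAWIO \
          -v /run/udev:/run/udev:ro \
          -v scrutiny-config:/opt/scrutiny/config \
          -v scrutiny-influxdb:/opt/scrutiny/influxdb \
          --device=/dev/sda --device=/dev/sdb \
          ghcr.io/analogj/scrutiny:master-omnibus

    The web UI at http://localhost:8080 then shows the per-attribute failure brackets described above.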



  • Sellers usually balk at running a long test, unfortunately. Occasionally they do it proactively and show you SMART data with a recent long-test log already included, but that’s rare.

    Many sellers aren’t technically savvy: it’s the first time they’ve heard of Hard Disk Sentinel, they send you pictures of the computer monitor taken with their phone, and so on. I consider it a win if they manage to show you the complete SMART attributes.
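
    If you can get hands-on access to the drive yourself, checking it with smartmontools is quick (replace /dev/sdX with the actual drive):

        # Full SMART attribute dump, including the self-test log
        sudo smartctl -a /dev/sdX

        # Start an extended (long) self-test; it can take many hours on a large HDD
        sudo smartctl -t long /dev/sdX

        # Check progress and results later
        sudo smartctl -l selftest /dev/sdX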


  • Don’t self-host email (SMTP) or public DNS. They’re hard to set up properly, hard to maintain, easy to compromise, and compromised servers end up being used in internet attacks.

    Don’t expose anything directly to the internet if you’re not willing to constantly monitor the vulnerability announcements, update to new releases as soon as they come out, monitor the container for intrusions and shenanigans, and accept the risk that the constant updates will break something. If you must expose a service, use a VPN (Tailscale is very easy to set up and use).
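
    As a rough sketch of the Tailscale route (the install script and command are from the official docs; “homeserver” and the port are placeholders for your machine and service):

        # On the home server and on each device that needs access:
        curl -fsSL https://tailscale.com/install.sh | sh
        sudo tailscale up

        # Then reach the service over the tailnet instead of the public internet,
        # e.g. http://homeserver:8096 or the machine's 100.x.y.z Tailscale IP.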

    Don’t self-host anything with important data that takes uber-geek skills to maintain and access. Ask yourself: if you were to die suddenly, how screwed would your non-tech-savvy family be, the ones who can’t tell a Linux server from a hot plate? Would they be able to keep functioning (calendar, photos, documents etc.) without constant maintenance? Could they still retrieve their files (docs, pics) with only basic computing skills? Could they migrate somewhere else when the server eventually dies?



  • I think it really depends on what you intend to do with it… Many answers here will mention what they use but not why.

    In my case I want to have various services installed in Docker containers, and I have the skills to manage Linux from the console. A very simple solution for me was a rock-solid, established Linux distro on the host (Debian stable) with Docker installed from Docker’s official apt repo. It’s clean, it’s simple, it’s reliable, and it’s easy to reinstall if it explodes.
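
    Roughly what that install looks like (paraphrased from Docker’s Debian install docs; double-check them there, as the repo setup changes occasionally):

        # Add Docker's official GPG key and apt repository, then install the engine
        sudo apt-get update && sudo apt-get install -y ca-certificates curl
        sudo install -m 0755 -d /etc/apt/keyrings
        sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
        sudo chmod a+r /etc/apt/keyrings/docker.asc
        echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
          https://download.docker.com/linux/debian $(. /etc/os-release && echo $VERSION_CODENAME) stable" | \
          sudo tee /etc/apt/sources.list.d/docker.list
        sudo apt-get update
        sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin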

    Why containers (as opposed to installing directly on the host)? I’ve done both over several years and I’ve come to consider the container approach cleaner. (I mention this because I’ve seen people wonder why even bother with containers.) It’s a nice sweet spot in between dumping everything on the host and a fully reproducible environment like NixOS or Ansible. I can reproduce a service perfectly thanks to docker compose; I can separate persistent data very cleanly thanks to container:host mapping of dirs and files; I can build flexible networking setups because containers can be treated as individual “machines” whose interfaces and ports I can juggle freely; I get some extra security from the container isolation; and it’s less complicated than using VMs.
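
    A minimal sketch of what one service ends up looking like (the service name, image and paths here are hypothetical placeholders):

        # docker-compose.yml
        services:
          myservice:                       # hypothetical service, swap in a real image
            image: example/myservice:1.2   # pinned tag keeps the setup reproducible
            restart: unless-stopped
            ports:
              - "8081:8080"                # host:container, juggle freely per service
            volumes:
              - ./config:/config           # persistent state stays in plain host dirs
              - ./data:/data

    docker compose up -d recreates it identically on a fresh host, and ./config plus ./data are all that needs backing up.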


    • Get a cheap VPS.
    • Get a domain name and point its A record to the IP of the VPS.
    • Set up a VPN tunnel between the VPS and your home server. You can use Tailscale or wg-easy. You don’t need to worry about CGNAT because the VPN is established outbound from your home server (either through Tailscale or out to the VPS IP with WireGuard).
    • Port-forward 443 on the VPS public IP through the tunnel to a reverse proxy running on the home server (NPM, Caddy, Traefik etc.); there’s a rough sketch of this after the list.
    • Get a Let’s Encrypt wildcard TLS certificate for *.yourdomain.tld.
    • Set up the reverse proxy to use the TLS certificate for immich.yourdomain.tld and point it at your Immich container.
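
    A rough sketch of the last three steps, assuming a plain WireGuard tunnel where the home server is 10.8.0.2 on wg0 and Caddy is the reverse proxy (IPs, interface names and the Immich port are placeholders; a wildcard cert in Caddy also requires a DNS-challenge plugin for your DNS provider, Cloudflare is just an example):

        # On the VPS: forward incoming 443 through the tunnel to the home server
        sudo sysctl -w net.ipv4.ip_forward=1
        sudo iptables -t nat -A PREROUTING  -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.8.0.2:443
        sudo iptables -t nat -A POSTROUTING -o wg0  -p tcp -d 10.8.0.2 --dport 443 -j MASQUERADE
        sudo iptables -A FORWARD -p tcp -d 10.8.0.2 --dport 443 -j ACCEPT
        # note: with MASQUERADE the proxy sees the tunnel IP as the client address

        # On the home server: /etc/caddy/Caddyfile
        *.yourdomain.tld {
            tls {
                dns cloudflare {env.CF_API_TOKEN}    # DNS-01 challenge for the wildcard cert
            }
            @immich host immich.yourdomain.tld
            handle @immich {
                reverse_proxy localhost:2283         # or the Immich container's name:port
            }
        }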

  • “I use Cloudflare tunnels because they are a good way of exposing sites to the internet without exposing my IP”

    What difference does that make? I’ve only ever heard one realistic reason for hiding your IP: a guy living in a suburban neighborhood with static IPs, where the IP pinpointed his house almost exactly.

    If you have a dynamic IP it will get recycled. If you get a static IP it will eventually get mapped to your precise location; Google and other big-data companies spend a lot of effort doing exactly that.

    “or opening ports […] or other attacks”

    If your services are accessible from the internet, they are accessible… it doesn’t matter that you don’t open ports on your local LAN; there’s still an ingress pathway, and encrypting the tunnel doesn’t mean your apps can’t get hacked.

    “I don’t have to worry as much about DDoS”

    How many DDoSes have you been through? Lol. CF will drop your tunnel like a hot potato if you’re ever actually targeted by one. If you think your $0/month plan gets the same DDoS protection as the paid accounts, you’re being super naive. Let me translate this page for you: the DDoS mitigation you get for $0/mo amounts to “basically nothing”. Any real mitigation starts with the $200/mo plan.


  • I’m partial to the DIY PC option because it allows far more flexibility. If you can swing the space for the larger box, IMO it’s the best way to go.

    Some things to keep in mind when speccing the box:

    • Some PCIe slots can come in extremely handy down the line. There’s an amazing variety of expansion cards that can save your butt when you decide to do something you haven’t foreseen.
    • Consider how many HDDs you’d like to have. This will determine the case size as well as how many SATA connectors you need to get.
    • Get an Intel CPU, at least 6th gen, because their integrated GPUs have hardware transcoding (Quick Sync) built in; there’s a compose sketch for this at the end of this comment.
    • Get at least one M.2 slot, so you can install the OS on an NVMe SSD without taking up a SATA connector. Read the motherboard specs though; some of them disable a SATA connector anyway if you use the M.2 slot in certain modes.
    • You can run a server on as little as 4 GB of RAM. You don’t actually need a lot of RAM unless you intend to run VMs or ZFS.

    Are you familiar with any Linux distro in particular? I would strongly recommend using Docker rather than installing services natively, regardless of distro.
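
    On the transcoding point, a minimal sketch of passing the iGPU into a Jellyfin container with compose (the media path is a placeholder; paths and port follow the usual jellyfin/jellyfin image layout):

        services:
          jellyfin:
            image: jellyfin/jellyfin:latest
            restart: unless-stopped
            devices:
              - /dev/dri:/dev/dri        # Intel iGPU for Quick Sync transcoding
            volumes:
              - ./config:/config
              - ./cache:/cache
              - /mnt/media:/media:ro     # placeholder path to your media
            ports:
              - "8096:8096"

    Then pick QSV or VAAPI under hardware acceleration in the Jellyfin dashboard’s playback settings.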