The latest update for FUTO keyboard just added this exact feature :)
So far everything has been very lacklustre. This update just got announced. Maybe it will be better?
There is an issue with your database persistence. The file is being uploaded but it’s not being recorded in your database for some reason.
Describe in detail what your hardware and software setup is, particularly the storage and OS.
You can probably check this by trying to upload something and then checking the database files to see the last modified date.
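A rough sketch of that check (the database path is a placeholder; substitute wherever your app actually stores its database file):

```shell
#!/bin/sh
# Sketch: print a file's last-modified time in epoch seconds, so you can
# compare the database file's mtime against the time of a fresh test upload.
last_modified() {
  # GNU stat first, BSD stat as a fallback
  stat -c '%Y' "$1" 2>/dev/null || stat -f '%m' "$1"
}

# Placeholder path -- point this at your app's actual database file.
DB_FILE="/path/to/app/data/app.db"
[ -f "$DB_FILE" ] && echo "DB last modified: $(last_modified "$DB_FILE")" \
  || echo "DB file not found (placeholder path)"
```

If the upload's timestamp is newer than the database file's, the write to the database is the part that's failing.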
I like this version of Fedora Atomic with KDE.
If you are willing to spend a bit more upfront: I bought a mini PC in 2017 and installed OPNsense on it, and it's still rock solid. For wifi I use a separate AP (a Ubiquiti UAP that I bought in 2015) and it is also going strong. Almost a decade of rock-solid performance easily beats any other router I've owned in terms of both performance and cost.
I have an atomic variant of Fedora 40 (Aurora) and it just works on an Intel CPU with integrated graphics. I have a USB-C dongle with HDMI out and it just works when I plug it in.
I also tried it on my steam deck dock the other day and it worked without issue.
Thanks! Makes sense if you can’t change file systems.
For what it's worth, ZFS lets you dedup on a per-dataset basis, so you can easily choose to have some files deduped and not others. Same with compression.
For example, without building anything new, the setup could have been to copy the data from the actual Minecraft server to the backup server running ZFS using rsync or some other tool. Then the backup server just takes a snapshot every 5 minutes or whatever. You now have a backup on another system, with snapshots at whatever frequency you want, with dedup.
Restoring an old backup just means you rsync from a snapshot back to the Minecraft server.
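A minimal sketch of that flow, assuming a backup host with a ZFS dataset mounted at /tank/mc-backup (the hostnames, paths, and snapshot name are all made up for illustration):

```shell
# On the backup server: pull the world data, then snapshot the dataset.
rsync -a --delete mcserver:/srv/minecraft/world/ /tank/mc-backup/world/
zfs snapshot tank/mc-backup@$(date +%Y-%m-%d_%H%M)

# Restoring: snapshots are browsable read-only under the hidden .zfs
# directory, so you can rsync a chosen snapshot straight back.
rsync -a /tank/mc-backup/.zfs/snapshot/2024-01-01_1200/world/ \
         mcserver:/srv/minecraft/world/
```

The rsync step can run from cron at whatever interval you like; the snapshot makes that moment in time immutable.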
Rsync is only needed if both servers don't have ZFS. If they both have ZFS, the send and receive commands built into ZFS are designed for exactly this use case. You can easily send a snapshot to another server if both ends run ZFS.
ZFS also has Samba and NFS export built in if you want to share the filesystem with another server.
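Roughly like this (pool, dataset, and hostnames are invented for the example):

```shell
# Initial full send of a snapshot to another ZFS host
zfs snapshot tank/minecraft@backup1
zfs send tank/minecraft@backup1 | ssh backuphost zfs receive backuppool/minecraft

# Later sends ship only the delta between two snapshots (incremental)
zfs snapshot tank/minecraft@backup2
zfs send -i tank/minecraft@backup1 tank/minecraft@backup2 | \
  ssh backuphost zfs receive backuppool/minecraft

# Built-in NFS export of a dataset, no /etc/exports editing needed
zfs set sharenfs=on tank/minecraft
```

The incremental send is what makes frequent replication cheap: only blocks that changed between the two snapshots cross the wire.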
I use ZFS so I'm not sure about others, but I thought all CoW filesystems had deduplication already? ZFS supports it natively (though it's off by default because of the memory cost). Why build your own file deduplication system instead of just using a ZFS filesystem and letting that do the work for you?
Snapshots are also extremely efficient on CoW filesystems like ZFS, as they only store the diff between the previous state and the current one, so taking a snapshot every 5 minutes is not a big deal for my homelab.
I can easily explore any of the snapshots and pull any file from any of them.
I'm not trying to shit on your project, just trying to understand its use case, since it seems to me ZFS already provides all of these benefits.
Start with this to learn how snapshots work
https://fedoramagazine.org/working-with-btrfs-snapshots/
Then this one to learn how to make automatic snapshots with retention:
https://ounapuu.ee/posts/2022/04/05/btrfs-snapshots/
I do something very similar with ZFS snapshots and deduplication on. I take one every 5 minutes and keep an hour's worth, then keep 24 hourly snapshots per day, and daily snapshots for a month, etc.
For backups to a remote location, you can send a snapshot offsite.
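The retention side can be a small script; here's a sketch of the pruning logic (the dataset name and keep-count are illustrative, and tools like sanoid or zfs-auto-snapshot already do this for you):

```shell
#!/bin/sh
# Sketch of snapshot retention: given snapshot names oldest-first on stdin,
# print every name except the newest $1 -- those are the prune candidates.
prune_candidates() {
  awk -v keep="$1" '{lines[NR]=$0} END {for (i = 1; i <= NR - keep; i++) print lines[i]}'
}

# Example wiring (illustrative; run from cron every 5 minutes):
#   zfs snapshot tank/data@auto-$(date +%s)
#   zfs list -t snapshot -o name -s creation -H tank/data \
#     | prune_candidates 12 | xargs -r -n1 zfs destroy
```

Keeping the newest 12 of a 5-minute snapshot series gives you the "1 hour's worth" tier; the hourly and daily tiers are the same idea with different names and keep-counts.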
FreshRSS if you want a self-hosted option.
This is really amazing! In theory, can you use 2 GB with 4 different VMs?
Conversely, I have a Dell XPS from 2018 that runs very well with Fedora Atomic (KDE). I upgraded the SSD and WiFi card and replaced the battery. It should easily last me another 5 years.
So Fedora Atomic?
There are like a dozen variants as well to suit any specialty application.
Thanks for this list!
The proper way of doing this is to have two separate systems in a cluster, such as Proxmox. The system with GPUs runs certain workloads and the non-GPU system runs the others.
Each system can be connected (or not) to a UPS, shut down during a power outage, and then boot back up when power returns.
Don't try hot-plugging a GPU; it will never be reliable.
Run a Proxmox or Kubernetes cluster. It is designed for this type of application but will add a fair amount of complexity.
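For reference, joining two Proxmox nodes and marking a VM as highly available looks roughly like this (the cluster name, IP, and VM ID are examples; check the Proxmox docs before relying on it):

```shell
# On the first node: create the cluster
pvecm create homelab

# On the second node: join it, pointing at the first node's IP
pvecm add 192.168.1.10

# Mark VM 100 as an HA resource so it can restart on a surviving node
ha-manager add vm:100
```

Note that proper HA in Proxmox wants three votes for quorum, so a two-node setup typically needs a small third device as a QDevice.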
First thing I would do is boot a live Ubuntu image from a USB. Make sure the hardware all works as expected.
Looks great! Any chance you'd be interested in giving it a Jellyfin backend connection?
Seafile works very well for this. They have a traditional sync app for desktop (Seafile client), and they also have an on-demand file browser that lets you make some files local if you want (SeaDrive). It's quite slick.
Any good resources on how to learn to use Toolbox properly?
Is this useful for a homelab with a current setup of 1 physical host -> Proxmox -> Alpine VM -> Docker?
Docker is managed by Portainer, which pulls Docker Compose files from a Git repo. Around 30 containers total in 10 or so stacks.
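For anyone curious what that looks like, each stack is just a Compose file in the repo that Portainer polls; a minimal (hypothetical) example:

```yaml
# stacks/freshrss/docker-compose.yml -- illustrative stack definition
services:
  freshrss:
    image: freshrss/freshrss:latest
    ports:
      - "8080:80"
    volumes:
      - freshrss-data:/var/www/FreshRSS/data
    restart: unless-stopped

volumes:
  freshrss-data:
```

Pointing Portainer's GitOps polling at the repo means editing a file and pushing is all it takes to redeploy a stack.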
This idea looks really interesting, but it seems to be mostly for Kubernetes deployments.