I saw this post and I was curious what was out there.
https://neuromatch.social/@jonny/113444325077647843
I'd like to put my lab servers to work archiving US federal data that's likely to get pulled - climate and biomed data seem most likely. The most obvious strategy to me seems like setting up mirror torrents on academictorrents. Anyone compiling a list of at-risk data yet?
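Not a full pipeline, but here's a minimal sketch of the first step, assuming a hand-compiled list of dataset URLs (the URLs below are placeholders): pull each file into a local mirror and record a checksum so copies can be verified before they get seeded as torrents.

```python
import hashlib
import pathlib
import urllib.request

# Placeholder list -- swap in whatever at-risk datasets get compiled.
DATASET_URLS = [
    "https://example.gov/climate/dataset1.nc",
    "https://example.gov/biomed/dataset2.csv",
]

MIRROR_DIR = pathlib.Path("mirror")
MIRROR_DIR.mkdir(exist_ok=True)

for url in DATASET_URLS:
    dest = MIRROR_DIR / url.rsplit("/", 1)[-1]
    urllib.request.urlretrieve(url, dest)           # download the file
    digest = hashlib.sha256(dest.read_bytes()).hexdigest()
    print(f"{digest}  {dest.name}")                 # keep checksums alongside the mirror
```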
One option that I've heard of in the past:
ArchiveBox is a powerful, self-hosted internet archiving solution to collect, save, and view websites offline.
That looks useful, I might host that. Does anyone have an RSS feed of at-risk data?
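I haven't found one yet, but if a feed like that existed, a small script could hand every new link to ArchiveBox. A rough sketch, assuming a hypothetical feed URL and that the `archivebox add` CLI is installed and run from inside the ArchiveBox data directory:

```python
import subprocess
import feedparser  # pip install feedparser

# Hypothetical feed of at-risk datasets/pages -- none is known to exist yet.
FEED_URL = "https://example.org/at-risk-data.rss"

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    # Hand each linked page to ArchiveBox for snapshotting.
    subprocess.run(["archivebox", "add", entry.link], check=True)
```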
This seems pretty cool. I might actually host this.
Going to check that out because…yeah. Just gotta figure out what and where to archive.
NOAA is at risk I think.
Everything is at risk.
I use M-Discs for long-term archival.
I don’t self-host it, I just use archive.org. That makes it available to others too.
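For what it's worth, you can also push pages into the Wayback Machine yourself instead of waiting for their crawlers. A minimal sketch, assuming the public `web.archive.org/save/` endpoint still accepts plain GET requests for this:

```python
import requests  # pip install requests

def save_to_wayback(url: str) -> str:
    """Ask the Wayback Machine to capture a snapshot of `url`."""
    resp = requests.get(f"https://web.archive.org/save/{url}", timeout=120)
    resp.raise_for_status()
    # The capture's location is usually reflected in the final response URL.
    return resp.url

print(save_to_wayback("https://www.noaa.gov/"))
```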
There was the attack on the Internet Archive recently; are there any good options out there to help mirror some of the data or otherwise provide redundancy?
It’s a single point of failure though.
Yes. This isn’t something you want your own machines to be doing if something else is already doing it.
But then who backs up the backups?
Realize how much they are supporting and storing.
Come back to the comments after.
I guess they back each other up. For example, archive.is can take archives from archive.org, and the saved page reflects the original URL and the original archiving time from the Wayback Machine (though it also notes the Wayback URL it pulled from, plus the time it archived that copy from Wayback).
Your argument is that a single backup is sufficient? I disagree, and I think most in the selfhosted and datahoarder communities would too.
Flash drives and periodic transfers.
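Along those lines, a tiny sketch of the "periodic transfer" half, assuming the flash drive mounts at a known path (both paths below are placeholders); run it from cron or a scheduled task:

```python
import shutil
from pathlib import Path

SOURCE = Path("/srv/archive")       # placeholder: local archive directory
DEST = Path("/media/flashdrive")    # placeholder: flash drive mount point

for src in SOURCE.rglob("*"):
    if src.is_file():
        dst = DEST / src.relative_to(SOURCE)
        # Copy only new or updated files so repeat runs stay fast.
        if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
```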