Epstein Files Jan 30, 2026

Data hoarders on reddit have been hard at work archiving the latest Epstein Files release from the U.S. Department of Justice. Below is a compilation of their work with download links.

Please seed all torrent files to distribute and preserve this data.

Ref: https://old.reddit.com/r/DataHoarder/comments/1qrk3qk/epstein_files_datasets_9_10_11_300_gb_lets_keep/

Epstein Files Data Sets 1-8: INTERNET ARCHIVE LINK

Epstein Files Data Set 1 (2.47 GB): TORRENT MAGNET LINK
Epstein Files Data Set 2 (631.6 MB): TORRENT MAGNET LINK
Epstein Files Data Set 3 (599.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 4 (358.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 5 (61.5 MB): TORRENT MAGNET LINK
Epstein Files Data Set 6 (53.0 MB): TORRENT MAGNET LINK
Epstein Files Data Set 7 (98.2 MB): TORRENT MAGNET LINK
Epstein Files Data Set 8 (10.67 GB): TORRENT MAGNET LINK


Epstein Files Data Set 9 (incomplete): contains only 49 GB of 180 GB. Multiple reports of the download being cut off by the DOJ server at offset 48995762176.

ORIGINAL JUSTICE DEPARTMENT LINK

  • TORRENT MAGNET LINK (removed due to reports of CSAM)

/u/susadmin’s More Complete Data Set 9 (96.25 GB)
De-duplicated merge of the 45.63 GB and 86.74 GB versions

  • TORRENT MAGNET LINK (removed due to reports of CSAM)

Epstein Files Data Set 10 (78.64 GB)

ORIGINAL JUSTICE DEPARTMENT LINK

  • TORRENT MAGNET LINK (removed due to reports of CSAM)
  • INTERNET ARCHIVE FOLDER (removed due to reports of CSAM)
  • INTERNET ARCHIVE DIRECT LINK (removed due to reports of CSAM)

Epstein Files Data Set 11 (25.55 GB)

ORIGINAL JUSTICE DEPARTMENT LINK

SHA1: 574950c0f86765e897268834ac6ef38b370cad2a


Epstein Files Data Set 12 (114.1 MB)

ORIGINAL JUSTICE DEPARTMENT LINK

SHA1: 20f804ab55687c957fd249cd0d417d5fe7438281
MD5: b1206186332bb1af021e86d68468f9fe
SHA256: b5314b7efca98e25d8b35e4b7fac3ebb3ca2e6cfd0937aa2300ca8b71543bbe2
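
To verify a download against the checksums above, here is a minimal sketch (the filename "dataset-12.zip" is a placeholder for whatever the DOJ zip is saved as locally):

    # Compute SHA1/MD5/SHA256 of a downloaded file and compare to the values listed above.
    import hashlib

    def file_digest(path: str, algo: str) -> str:
        h = hashlib.new(algo)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    for algo in ("sha1", "md5", "sha256"):
        print(algo, file_digest("dataset-12.zip", algo))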


This list will be edited as more data becomes available, particularly with regard to Data Set 9 (EDIT: NOT ANYMORE)


EDIT [2026-02-02]: After being made aware of potential CSAM in the original Data Set 9 releases and seeing confirmation in the New York Times, I will no longer support any effort to maintain links to archives of it. There is suspicion of CSAM in Data Set 10 as well. I am removing links to both archives.

Some in this thread may be upset by this action. It is right to be distrustful of a government that has not shown signs of integrity. However, I do trust journalists who hold the government accountable.

I am abandoning this project and removing any links to content that commenters here and on reddit have suggested may contain CSAM.

Ref 1: https://www.nytimes.com/2026/02/01/us/nude-photos-epstein-files.html
Ref 2: https://www.404media.co/doj-released-unredacted-nude-images-in-epstein-files

  • internauta@lemmy.world · 16 hours ago

    Someone on reddit (u/FuckThisSite3) posted a more complete Data Set 9:

    I assembled a tar file with all I got from dataset-9.

    Magnet link: magnet:?xt=urn:btih:5b50564ee995a54009fec387c97f9465eb18ba00&dn=dataset-9_by_fuckthissite3.tar&xl=148072017920

    SHA256: 5adc043bcf94304024d718e57267c1aa009d782835f6adbe6ad7fdbb763f15c5

    The tar contains 254,477 files, which is 148,072,017,920 bytes (148.07 GB / 137.9 GiB).
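
    A minimal sketch for checking the finished download against those numbers (the local filename below assumes the tar was saved under the name in the magnet link):

        # Verify the SHA-256 and count the regular files inside the tar.
        import hashlib
        import tarfile

        path = "dataset-9_by_fuckthissite3.tar"

        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        print("sha256:", h.hexdigest())           # should match the value above

        with tarfile.open(path) as tar:            # slow on a 148 GB tar, but streams
            print("files:", sum(1 for m in tar if m.isfile()))   # expect 254,477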

    • ZaInT@lemmy.world · 11 hours ago

      Seeding from 1 node, 3 more on the way. EDIT: 3 running; the 4th was planned to be temporary but should be up soon.

    • ZaInT@lemmy.world · 2 days ago

      I am seeding 8, 10, 11, 12 (the ones only available as .zip on justice.gov) for the foreseeable future, as well as the partials of Set 9. I'm looking "everywhere" hoping for some success on Set 9 and will keep pushing it until bandwidth dies or until a dozen or so seeders are on, whenever the complete bundle is assembled. Hoping for some good news soon; things seem to be getting nuked very rapidly now.

      I also read that the court documents and one other page were taken down. I have those files, but they are not sorted by page, just thrown into a bulk download directory; I had a feeling this would happen and wanted to pull them quickly. In case they're of any use, I put them on Mega and Gofile a few days ago and they haven't been taken down so far:

      https://gofile.io/d/dff931d5-a646-46f1-b34e-079798f508a2 https://mega.nz/folder/XVMCgLLR#EKVS8Sfiry-VtVAxZ7q_Ig

      It's most likely files that "everyone" already has, but better one mirror too many than one too few.

      • ZaInT@lemmy.world · 1 day ago

        Also seeding the following (but probably not for very long unless seeders start dropping; they are all at 300-1200 ATM):
        59975667f8bdd5baf9945b0e2db8a57d52d32957
        0a3d4b84a77bd982c9c2761f40944402b94f9c64
        7ac8f771678d19c75a26ea6c14e7d4c003fbf9b6
        c3a522d6810ee717a2c7e2ef705163e297d34b72
        d509cc4ca1a415a9ba3b6cb920f67c44aed7fe1f
        e618654607f2c34a41c88458bf2fcdfa86a52174
        acb9cb1741502c7dc09460e4fb7b44eac8022906

        Trying to pull c100b1b7c4b1e662dd8adc79ae3e42eef6080aee (redundant limited dataset for that GitHub relations chart).

        Pulling f5cbe5026b1f86617c520d0a9cd610d6254cbe85 (just listed on the GitHub repo that lists the same magnets as here). I will probably become the 2nd seeder in an hour or two and will stay seeding that one for at least a week, or until the swarm looks healthy by the dozens or so.

        Will continue to monitor whatever progress is being made here. I should also have a small subset of DS9 but it will likely only be the first 200 files or so at most. Needless to say I will compare against the existing torrents just in case.

        Thanks everyone for your hard work, this is exactly why I started hoarding :)

        EDIT: The last magnet ID I listed is the summarized torrent from the repo linked by Nomad64.

  • DigitalForensick@lemmy.world · 2 days ago

    For anyone looking into doing some OSINT work, EFTA00809187 is an epic file.

    It contains lists of ALL known JE emails, usernames, websites, social media accounts, etc. from that time.

    • Dhoard@lemmy.world · 1 day ago

      Regarding EFTA00809187: did that guy from Pastebin with the complete DS9 file ever answer you?

  • Dhoard@lemmy.world · 2 days ago

    Theoretically speaking, if a website has the archives, what is stopping people from downloading each file on a page-by-page basis from the archive?

    Edit: Never mind; I saw a full list of URLs that the archive managed to save, and it is missing a lot.

    • DigitalForensick@lemmy.world · 2 days ago

      Nothing, but even the archived pages aren't 100%, because some of the files were "faked" in the paginated file lists on the DOJ site. It does work well enough, though. I did this to recover all the court records and FOIA files.
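
      A minimal sketch of that page-by-page recovery approach, assuming a plain-text list of DOJ file URLs and the requests library; it queries the Wayback Machine availability API and saves the closest snapshot of each URL (file and directory names are placeholders):

          # Fetch archived copies of a list of URLs via the Wayback Machine availability API.
          import pathlib
          import requests

          out = pathlib.Path("wayback_mirror")
          out.mkdir(exist_ok=True)

          for url in pathlib.Path("doj_urls.txt").read_text().split():
              info = requests.get("https://archive.org/wayback/available",
                                  params={"url": url}, timeout=30).json()
              closest = info.get("archived_snapshots", {}).get("closest")
              if not closest or not closest.get("available"):
                  print("no snapshot:", url)
                  continue
              snap = requests.get(closest["url"], timeout=120)
              (out / url.rsplit("/", 1)[-1]).write_bytes(snap.content)
              print("saved:", url)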

  • ArzymKoteyko@lemmy.world · 3 days ago

    Hi everyone, maybe I'm a bit late to this, but I wanted to share my findings. I parsed every page up to 40k in DS9 three times and the results matched the distribution in PeoplesElbow's findings (no content after page 14k and a lot of duplications), BUT I parsed 4 times more unique URLs: 246,079 (still 2x short of the official size). A strange thing is that on the second pass (one day after the first) I started receiving new URLs on old pages.

    Here are the stats by file type:

     count  | file type 
    --------+------
          1 | ts
          8 | mov
        236 | mp4
     244326 | pdf
         73 | m4a
          1 | vob
          1 | docx
          1 | doc
          9 | m4v
       1422 | avi
          1 | wmv
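
    A minimal sketch of how a tally like the one above can be produced, assuming the scraped URLs are saved one per line (the filename "ds9_urls.txt" is an assumption):

        # Count scraped URLs by file extension.
        from collections import Counter
        from pathlib import Path
        from urllib.parse import urlparse

        urls = Path("ds9_urls.txt").read_text().split()
        counts = Counter(Path(urlparse(u).path).suffix.lstrip(".").lower() for u in urls)
        for ext, n in counts.most_common():
            print(f"{n:>8}  {ext or '(none)'}")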
    
    • DigitalForensick@lemmy.world · 2 days ago

      Nice work man! I also discovered something yesterday that I think is worth pointing out.

      DUPLICATE FILES: Within the datasets, there are often emails, doc scans, etc. that are duplicate entries. (I'm not talking about multi-torrent stitching, but actual duplicate documents within the raw dataset.) **These duplicates must be preserved.** When looking at two copies of the same duplicate file, I found that sometimes the redactions are in different places! This can be used to extract more info later down the road.
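
      A minimal sketch of how differently-redacted copies could be flagged for comparison, assuming duplicates share the same file name (e.g. the same EFTA number) across folders; the "dataset" path is a placeholder:

          # Same file name appearing with more than one SHA-256 suggests copies
          # of the same document that may be redacted differently.
          import hashlib
          from collections import defaultdict
          from pathlib import Path

          versions = defaultdict(set)
          for p in Path("dataset").rglob("*.pdf"):
              versions[p.name].add(hashlib.sha256(p.read_bytes()).hexdigest())

          for name, digests in versions.items():
              if len(digests) > 1:
                  print(f"{name}: {len(digests)} distinct versions worth comparing")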

      • ArzymKoteyko@lemmy.world · 2 days ago

        Finally got my hands on the original DS9 OPT file and have started downloading files from it. Don't know how long it will take. I also made a Git repo with stats and index files from the DOJ website and the OPT from the archive: https://github.com/ArzymKoteyko/JEDatasets. In short, the only difference is that I got an additional 1,753 links to video files and a strange 0-byte .docx file [EFTA00335487.docx].

  • Xenom0rph@lemmy.world · 3 days ago

    I'm still seeding the partial Dataset 9 (45.63 GB and 89.54 GB) and all the other datasets. Is there a newer Dataset 9 available?

  • DigitalForensick@lemmy.world · 4 days ago

    While I feel hopeful that we will be able to reconstruct the archive and create some sort of baseline that can be put back out there, I also can't stop thinking about the "and then what" aspect here. We've seen our elected officials do nothing with this info over and over again, and I'm worried this is going to repeat itself.

    I’m fully open to input on this, but I think having a group path forward is useful here. These are the things I believe we can do to move the needle.

    Right Now:

    1. Create a clean Data Archive for each of the known datasets (01-12). Something that is actually organized and accessible.
    2. Create a working Archive Directory containing an "itemized" reference list (SQL DB?) of the full Data Archive, with each document listed as a row with certain metadata. Imagining a GitHub repo that we can all contribute to as we work. Columns: file number; directory location; file type (image, legal record, flight log, email, video, etc.); file status (Redacted bool, Missing bool, Flagged bool). (See the schema sketch after this list.)
    3. Infill any MISSING records where possible.
    4. Extract images out of the .pdf format, break out the "multi-file" PDFs, and rename images/docs by file number. (I made a quick script that does this reliably well.)
    5. Determine which files were left as CSAM and “redact” them ourselves, removing any liability on our part.
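
    A minimal sketch of the Archive Directory in item 2, done as SQLite; the table and column names are placeholders, not an agreed format:

        # Hypothetical Archive Directory schema: one row of metadata per document.
        import sqlite3

        con = sqlite3.connect("archive_directory.db")
        con.execute("""
            CREATE TABLE IF NOT EXISTS documents (
                file_number  TEXT PRIMARY KEY,  -- e.g. EFTA-style file number
                dir_location TEXT,              -- path within the dataset
                file_type    TEXT,              -- image, legal record, flight log, email, video, ...
                redacted     INTEGER DEFAULT 0, -- bool flags from the column list above
                missing      INTEGER DEFAULT 0,
                flagged      INTEGER DEFAULT 0
            )
        """)
        con.commit()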

    What's Next: Once we have the Archive and Archive Directory, we can begin safely and confidently walking through the Directory as a group effort and fill in as many files/blanks as possible.

    1. Identify and de-redact all documents with garbage redactions (remember the copy/paste DOJ blunders from December), and identify poorly positioned redaction bars to uncover obfuscated names.
    2. LABELING! If we could start adding labels to each document in the form of tags that cover individuals, emails, locations, and businesses, it would make it MUCH easier for people to "connect the dots". (A possible tag shape is sketched after this list.)
    3. Event timeline… This will be hard, but if we can apply a timeline ID to each document, we can put the archive in order of events.
    4. Create some method for visualizing the timeline, searching, or making connections with labels.
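
    One possible shape for the tags in item 2 and the timeline ID in item 3, kept self-contained and using the same hypothetical SQLite file as the earlier sketch (all names are placeholders):

        # Hypothetical tag table: one row per (document, tag) pair, with an optional event date.
        import sqlite3

        con = sqlite3.connect("archive_directory.db")
        con.execute("""
            CREATE TABLE IF NOT EXISTS tags (
                file_number TEXT,               -- matches documents.file_number
                tag_type    TEXT,               -- individual, email, location, business, ...
                tag_value   TEXT,
                event_date  TEXT                -- optional ISO date used to order the timeline
            )
        """)
        con.commit()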

    We may not be detectives, legislators, or lawmen, but we are sleuth nerds, and the best thing we can do is get this data into a place that allows others to push for justice and put an end to this crap once and for all. It's lofty, I know, but enough is enough. …Thoughts?

    • ATroubledMaker@lemmy.world · 3 days ago

      So I know how to do a lot of this and can bring something significant insofar as an understanding of both the gravity and volume of things here. Looking through the way everything that has been released has been organized: well, it's not. This isn't how an evidence production should ever look.

      There is a way to best organize this and to do so how it would be expected for the presentation of a catalog of digital evidence. I’m aware of this because I’ve done it for years.

      But almost if not maybe even more important is that while there are monsters still hidden in these documents, whether released or still held back, there is something else to consider.

      Those who are involved and know who the monsters are and can never forget them. Ever.

      I took an interest in this specifically because I felt a moral obligation as someone who has been personally affected in this way, just not by these specific monsters. However, what I do know is the very structure that allows them to roam free, unscathed, even able to sleep at night. What failed to protect those who were harmed also failed me, and when I do sleep it is the nightmare that also can never be forgotten.

      This resulted in learning how to spot their fuck-ups, because I knew what they were and had no reason to trust that it would fix itself. With that said, the insight of someone who understands this through unfortunate lived experience provides something that cannot be learned, and something I hope others will never be forced to learn.

      I have messaged a few people. One responded. Just trust me when I say that if you are going to work collaboratively, have someone who understands the pain you are only going to be reading about.

      I will help where it’s needed and it’s needed.

    • Wild_Cow_5769@lemmy.world · 4 days ago

      GFD….

      My 2 cents. As a father of only daughters…

      If we don’t weed out this sick behavior as a society we never will.

      My thoughts are enough is enough.

      Once the files are gone there is little to zero chance they are ever made public again….

      You expect me to believe that an "oh shit, we messed up" was an accident?

      It’s the perfect excuse… so no one looks at the files.

      That’s my 2 cents.

      • DigitalForensick@lemmy.world · 3 days ago

        I've been thinking a lot about this whole thing. I don't want to be worried or fearful here: we have done nothing wrong! Anything we have archived was provided to us directly by them in the first place. There are whispers all over the internet, random torrents being passed around, conspiracies, etc., but what are we actually doing other than freaking ourselves out (myself at least) and going viral with an endless stream of "OMG LOOK AT THIS FILE" videos/posts?

        I vote to remove any of the 'concerning' files and backfill with blank placeholder PDFs stating the justification, then collect everything we have so far, create file hashes, and put out a clean, stable, safely indexed archive. We wipe away any concerns and can proceed methodically through the trail of documents, resulting in an obvious and accessible collection of evidence. From there we can actually start organizing to create a tool that can be used to crowdsource tagging, timestamping, and parsing the data. I'm a developer and am happy to offer my skill set.
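
        A minimal sketch of the placeholder idea, assuming the reportlab package is available; the file number, reason text, and output name are all placeholders:

            # Generate a one-page placeholder PDF for a removed file, stating why it was pulled.
            from reportlab.lib.pagesizes import letter
            from reportlab.pdfgen import canvas

            def write_placeholder(file_number: str, reason: str, out_path: str) -> None:
                c = canvas.Canvas(out_path, pagesize=letter)
                c.drawString(72, 720, f"{file_number} intentionally omitted from this archive.")
                c.drawString(72, 700, f"Reason: {reason}")
                c.showPage()
                c.save()

            write_placeholder("EFTA00000000", "removed pending review", "EFTA00000000_placeholder.pdf")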

        Taking a step back: it's fun to do the "digital sleuth" thing for a while, but then what? We have the files… (mostly)… Great. We all have our own lives, jobs, and families, and taking actual time to dig into this and produce a real solution that can actually make a difference is a pretty big ask. That said, this feels like a moment where we finally can make an actual difference, and I think it's worth committing to. If any of you are interested in helping beyond archival, please let me know.

        I just downloaded Matrix, but I'm new to this, so I'm not sure how it all works. Happy to link up via Discord, Matrix, email, or whatever.

    • PeoplesElbow@lemmy.world · 4 days ago

      We definitely need a crowdsourced method for going through all the files. I am currently building a solo Cytoscape tool to try out making an affiliation graph. Expanding this into a tool for a community, with authorization so that only whitelisted individuals can work on it, is beyond my scope, and I can't volunteer to make such an important tool on my own, but I am happy to help build it. I can convert my existing tool into a prototype if anyone wants to collaborate with me on it. I am an amateur, but I will spend all the Cursor credits on this.

  • acelee1012@lemmy.world · 4 days ago

    Has anyone made a Dataset 9 and 10 torrent file without the files in it that the NYT reported as potentially CSAM?

    • locke1@lemmy.world · 3 days ago

      I don't think anyone knows for sure which files those are. It would've been helpful if the NYT had published the file names. But maybe the NYT isn't sure themselves, as they wrote that some of the images are "possibly" of teenagers.

      To be on the safe side, I guess you could just remove all nude images from the dataset. It is a ton of images to go through though, hundreds of thousands.

  • activeinvestigator@lemmy.world · 4 days ago

    Do people here have the partial Dataset 9, or are you all missing the entire set? There is a magnet link floating around for ~100 GB of it, the one removed in the OP.

    I am trying to figure out exactly how many files Dataset 9 is supposed to have in it. Before the zip file went dark, I was able to download about 2 GB of it (this was today, so maybe not the original zip file from Jan 30th). At the head of the zip file is an index file, VOL00009.OPT; you don't need the full download in order to read it. The index says there are 531,307 PDFs; the 100 GB torrent has 531,256, so it's missing 51 PDFs. I checked the 51 file names and they no longer exist as individual files on the DOJ website either. I'm assuming these are the CSAM.

    Note that the 3M number of released documents != 3M PDFs: each PDF page is counted as a "document". Dataset 9 contains 1,223,757 documents, and according to the index, we are missing only 51 documents; they are not multipage. In total, I have 2,731,789 documents from Datasets 1-12, short of the 3M number. The index I got was also not missing any document ranges.

    It's curious that the zip file had an extra 80 GB when only 51 documents are missing. I'm currently scraping links from the DOJ webpage to double-check the filenames.
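
    A minimal sketch of reading that index, assuming VOL00009.OPT follows the standard Opticon load-file layout (comma-separated: bates number, volume, image path, document-break flag, folder break, box break, page count):

        # Count pages vs. unique PDF paths listed in the OPT index.
        pages = 0
        paths = set()
        with open("VOL00009.OPT", encoding="utf-8", errors="replace") as f:
            for line in f:
                fields = line.rstrip("\r\n").split(",")
                if len(fields) < 3 or not fields[2]:
                    continue
                pages += 1
                paths.add(fields[2])            # e.g. IMAGES\0001\EFTA00039025.pdf
        print(f"{pages:,} pages across {len(paths):,} unique PDFs")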

    • Arthas@lemmy.world · 4 days ago

      I analyzed with AI the ~36 GB that I was able to download before they erased the zip file from the server.

      Complete Volume Analysis
      
        Based on the OPT metadata file, here's what VOL00009 was supposed to contain:
      
        Full Volume Specifications
      
        - Total Bates-numbered pages: 1,223,757 pages
        - Total unique PDF files: 531,307 individual PDFs
        - Bates number range: EFTA00039025 to EFTA01262781
        - Subdirectory structure: IMAGES\0001\ through IMAGES\0532\ (532 folders)
        - Expected size: ~180 GB (based on your download info)
      
        What You Actually Got
      
        - PDF files received: 90,982 files
        - Subdirectories: 91 folders (0001 through ~0091)
        - Current size: 37 GB
        - Percentage received: ~17% of the files (91 out of 532 folders)
      
        The Math
      
        Expected:  531,307 PDF files / 180 GB / 532 folders
        Received:   90,982 PDF files /  37 GB /  91 folders
        Missing:   440,325 PDF files / 143 GB / 441 folders
      
         Insight ─────────────────────────────────────
        You got approximately the first 17% of the volume before the server deleted it. The good news is that the DAT/OPT index files are complete, so you have a full manifest of what should be there. This means:
        - You know exactly which documents are missing (folders 0092-0532)
      

      I haven't looked into downloading the partials from archive.org yet, to see whether I have any files from Dataset 9 that archive.org doesn't have.

    • Wild_Cow_5769@lemmy.world · 4 days ago

      That's pretty cool…

      Can you send me a DM of the 51? If I come across one and it isn't some sketchy porn, I'll let you know.

  • Wild_Cow_5769@lemmy.world · 4 days ago

    Reach me at @wild_cow_5769:matrix.org if someone has a group working on finding the dataset.

    There are billions of people on earth. Someone downloaded dataset 9 before the link was taken down. We just have to find them :)