Here is a fairly robust way to make sure a drive is safe to put into service (a rough command-line sketch follows the steps). I have used this process before and caught drives that would have failed shortly after being put into production, and some that would have failed once they were more than half full.

  1. Check S.M.A.R.T Info: Confirm the Seek Error Rate, Read Error Rate, Reallocated Sector Count, and Uncorrectable Sector Count are all zero (0)

  2. Run Short S.M.A.R.T test

  3. Repeat Step 1

  4. Run Conveyance S.M.A.R.T test

  5. Repeat Step 1

  6. Run Destructive Badblocks test (read and write)

  7. Repeat Step 1

  8. Perform a FULL Format (Overwrite with Zeros)

  9. Repeat Step 1

  10. Run Extended S.M.A.R.T test

  11. Repeat Step 1
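
On Linux, the sequence above maps roughly onto smartctl and badblocks like this (one possible toolset, not necessarily the only one; replace /dev/sdX with your device, and expect the badblocks and extended-test steps to take a day or more on large drives):

    # 1. Baseline S.M.A.R.T attributes; re-run this after every step and compare
    smartctl -A /dev/sdX

    # 2-5. Short and conveyance self-tests (wait for each to finish, then read the log)
    smartctl -t short /dev/sdX
    smartctl -l selftest /dev/sdX
    smartctl -t conveyance /dev/sdX
    smartctl -l selftest /dev/sdX

    # 6. Destructive badblocks pass: writes and reads back four patterns, wipes the drive
    badblocks -wsv -b 4096 /dev/sdX

    # 8. Full overwrite with zeros; status=progress shows the current write rate
    dd if=/dev/zero of=/dev/sdX bs=1M status=progress

    # 10. Extended (long) self-test, then check the self-test log one last time
    smartctl -t long /dev/sdX
    smartctl -l selftest /dev/sdX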

Return the drive if either of the following is true:

A) The formatting speed drops more than 10MB/s below 80MB/s, i.e. falls under roughly 70MB/s (my defective one was ~40MB/s from first power-on)

B) The S.M.A.R.T tests show error count increasing at any step
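
A simple way to check both as you go (smartctl/iostat shown as one example; /dev/sdX is a placeholder):

    # criterion B: snapshot the attribute table before and after each step, then diff
    smartctl -A /dev/sdX > smart_before.txt
    # ... run the next test step ...
    smartctl -A /dev/sdX > smart_after.txt
    diff smart_before.txt smart_after.txt

    # criterion A: while the format/zero-fill runs, watch the per-device write rate
    iostat -mx 5        # look at the wMB/s column for the drive being formatted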

It is also highly advisable to stagger the testing (and repeat some of it) if you plan on using multiple drives in a pool/RAID config. That way the wear on the drives differs, which reduces the likelihood of them failing at the same time. For example, I re-ran either the full format or the badblocks test on some of the drives, so some drives have 48 hours of testing, some have 72, and some have 96. This lowers the chances of multiple drive failures during a rebuild.
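
For example, to give a couple of drives in a batch one extra destructive pass (device names here are only placeholders):

    # re-run the destructive badblocks pass on a subset of the batch
    for dev in /dev/sdc /dev/sdd; do
        badblocks -wsv -b 4096 "$dev"
    done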

  • C-3H_gjP@alien.topB · 7 months ago

    Jeez, you’re burning through so much of the drive’s lifespan just checking the damn thing. If a failed drive will cause problems worthy of this amount of burn-in time, you need a more robust setup.

    I run all used eBay drives. Except for a glance at the SMART data before adding them to the array, I don’t test them at all. Just keep an extra drive or two on hand as spares. Life’s easier when you plan for failure instead of fighting it.
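
    That glance is basically something like this (smartctl as one example; /dev/sdX is a placeholder):

        # overall health verdict plus the full attribute table
        smartctl -H -A /dev/sdX

        # on used drives, look at Power_On_Hours, Reallocated_Sector_Ct,
        # Current_Pending_Sector and Offline_Uncorrectable before trusting the disk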

    • GolemancerVekk@alien.topB · 7 months ago

      Same, except I also use Scrutiny to flag drives for my attention. It makes educated guesses for a pass/fail mark, using analysis of vendor-specific interpretations of SMART values, matched against the failure thresholds from the BackBlaze survey. It can tell you things like “the current value for the Command Timeout attribute for this drive falls into the 1-10% bracket of probability of failure according to BackBlaze”.
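
      If anyone wants to try it, the all-in-one container is roughly this, going from memory of the Scrutiny README (double-check the image tag and flags there before copying):

          # web UI on :8080; pass in the drives you want the collector to monitor
          docker run -d --name scrutiny \
            -p 8080:8080 \
            -v /run/udev:/run/udev:ro \
            --cap-add SYS_RAWIO \
            --device=/dev/sda --device=/dev/sdb \
            ghcr.io/analogj/scrutiny:master-omnibus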

      It helps me to plan ahead. If, for example, I have 3 drives that Scrutiny says “smell funny”, it would be nice to have 2-3 spares on hand rather than just 1. Or if two of those drives happen to be together in a 2-disk mirror, perhaps I can swap one somewhere else.

  • danni3boi@alien.topB · 7 months ago

    What program are you using to run those tests? Is it usable on Windows 10? Thanks for putting your guide up!

  • HTWingNut@alien.topB · 7 months ago

    Way Overkill.

    A single pass read (a SMART test is fine) and a single pass write (ones, zeros, random, whatever you want) is more than adequate to determine any issues a new disk may have out of the gate, unless you want to isolate a fringe-case condition and waste time and wear on your hard drive doing so.
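
    In practice that’s just something along these lines (smartctl/dd as one example; /dev/sdX is a placeholder):

        # read pass: the drive's own extended self-test reads the whole surface
        smartctl -t long /dev/sdX
        smartctl -l selftest /dev/sdX    # check the log once the test finishes

        # write pass: one full overwrite with whatever pattern you like (zeros here)
        dd if=/dev/zero of=/dev/sdX bs=1M status=progress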

    • Oinkvote@alien.topB · 7 months ago

      For real. I suppose if you kept just a single copy of your data on the drive you’d want to really, really make sure? But then again, why would you keep one copy of anything?

      TLDR: smart is smort enuf

    • binaryriot@alien.topB · 7 months ago

      I do it the other way around: first write (zero wipe), then read (SMART long test). Served me well for many disks. :)

    • GolemancerVekk@alien.topB · 7 months ago

      Yeah, I was under the impression these two attributes vary so wildly between vendors that they’re basically devoid of meaning by now.

  • pongpaktecha@alien.topB · 7 months ago

    A single full read and full write test should be plenty. Drives tend to fail really early on, or don’t fail at all until EOL.

  • kon_dev@alien.topB · 7 months ago

    I guess having a backup and error-correcting file systems like ZFS or BTRFS will help you more long term. Sure, watch for SMART values, but IMHO don’t go overboard with tests. I do an extended SMART test, rebuild/extend my RAID, run a quick SMART test again, and that’s it. Drives can die at any time, even if they were fine after a long test cycle. The 3-2-1 rule should save you from data loss.
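
    With ZFS, for example, the ongoing integrity check is just a periodic scrub (pool name is a placeholder):

        # read every block and verify it against its checksum, repairing from redundancy
        zpool scrub tank
        zpool status -v tank    # shows scrub progress and per-device checksum errors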

  • Glittering_Earth_394@alien.topB · 7 months ago

    How about if I have already filled the new hard drive (I still have the data on the source drives) and just want to make sure all of it is readable (before erasing the data from the source drives), without having to copy all the data off the new drive?

  • smoke007007@alien.topB · 7 months ago

    But why do all this if you’re using RAID with a hot spare? If a new drive fails, just replace it once the failure is detected?