• MTK@lemmy.world

    I generally agree, it won’t take long for SSDs to be cheap enough to justify the expense. HDDs are in a way similar to CDs/DVDs: they had their time, and even lasted much longer than expected, but eventually the newer technology became cheap enough that the old medium’s slight price advantage no longer made sense.

    SSDs win on every count for live systems, and long-term cold storage goes to tape. Not a lot of reasons to keep HDDs around.

    • xthexder@l.sw0.com

      As a person hosting my own data storage, tape is completely out of reach. The equipment to read archival tapes would cost more than my entire system. It’s also got extremely high latency compared to spinning disks, which I can still use as live storage.

      Unless you’re a huge company, spinning disks will be the way to go for bulk storage for quite a while.

      • Marud@lemmy.marud.fr

        Well, tape is still relevant for the 3-2-1 backup rule. I worked at a pretty big hosting company where we’d push out 400 TB of backup data every weekend. It’s the only medium that gives you a real, secure, fully offline copy that doesn’t depend on another online hosting service.

        • xthexder@l.sw0.com

          If you’re storing petabytes of data, sure. But when a tape drive costs $8k+ (the only price I could find that wasn’t “call for quote”) and you’re storing less than 500 TB, it’s cheaper to buy hard drives.
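
          For a rough sense of where that break-even sits, here’s a back-of-the-envelope sketch (the $8k drive price is the figure above; the per-TB prices for cartridges and hard drives are assumptions purely for illustration):

          ```python
          # Rough break-even between a tape setup and plain hard drives.
          # All prices are assumed ballpark figures, not quotes.
          TAPE_DRIVE_COST = 8000   # one-time cost of the tape drive
          TAPE_COST_PER_TB = 5     # assumed cartridge cost per TB
          HDD_COST_PER_TB = 15     # assumed hard drive cost per TB

          def tape_total(tb: float) -> float:
              return TAPE_DRIVE_COST + TAPE_COST_PER_TB * tb

          def hdd_total(tb: float) -> float:
              return HDD_COST_PER_TB * tb

          for tb in (100, 500, 1000, 2000):
              print(f"{tb:>5} TB  tape ${tape_total(tb):>8,.0f}  hdd ${hdd_total(tb):>8,.0f}")

          # Capacity where tape starts winning under these assumptions:
          print("break-even ~", TAPE_DRIVE_COST / (HDD_COST_PER_TB - TAPE_COST_PER_TB), "TB")
          ```

          With those assumed prices the crossover lands around 800 TB, which is why tape only starts paying off at near-petabyte scale.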

          I’m not sure how important two types of media is these days. I personally have all my larger data on hard drives, but with multiple off-site copies and RAID redundancy. Some people count “cloud” as another type of storage, but that’s just “somebody else’s hard drive”.

    • Nomecks

      Spinning-platter capacity can’t keep up with SSDs. HDDs are just starting to break the 30 TB mark while SSDs are shipping at 50+ TB. The cost delta per TB is closing fast. You can also have always-on compression and dedupe in most cases with flash, so you get better utilization.

      • SaltySalamander@fedia.io

        You can also have always-on compression and dedupe in most cases with flash

        As you can with spinning disks. Nothing about flash makes this a special feature.

        • enumerator4829@sh.itjust.works

          See, for example, the storage systems from Vast or Pure. You can increase the compression window size and dedupe far smaller blocks. Fast random I/O also allows you to do that “online” in the background. In the case of Vast, you also have multiple readers on the same SSD doing that compression and dedupe.

          So the feature isn’t that special. What you can do with it in practice changes drastically.

        • Nomecks

          The difference is you can use inline compression and dedupe in a high-performance environment. HDDs suck at random I/O.

    • jj4211@lemmy.world

      The per-disk cost difference is now about three-fold, rather than an order of magnitude.

      The disks don’t make up as much of the cost of these solutions as you’d think, so a disk-based solution of similar capacity might be more like 40% cheaper rather than 90% cheaper.
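
      As a toy illustration of why a roughly three-fold per-disk gap shrinks to something like 40% at the solution level (the share of system cost attributed to the drives is an assumption here, purely for illustration):

      ```python
      # Toy total-cost model: drives are only part of a storage system's price.
      # All numbers are illustrative assumptions.
      ssd_system = 100.0            # normalize an all-flash system to 100
      drive_share = 0.6             # assume drives are 60% of that system's cost

      hdd_drives = drive_share / 3  # HDDs roughly 3x cheaper per TB
      hdd_system = ((1 - drive_share) + hdd_drives) * 100

      print(f"per-disk saving at a 3x gap: ~{1 - 1/3:.0%}")
      print(f"whole-solution saving:       ~{1 - hdd_system / ssd_system:.0%}")
      ```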

      The market for pure capacity-play storage is well served by spinning platters, for now. But there’s little reason to iterate on your storage subsystem design; the same design you had in 2018 can keep up with modern platters. Compare that to SSDs, where the form factors have evolved and the interface gets a revision with every PCIe generation.

    • fuckwit_mcbumcrumble@lemmy.dbzer0.com

      For servers, physical space is also a huge concern. 2.5” hard drives cap out at around 6 TB, I think, while you can easily find an 8 TB 2.5” SSD anywhere. We have 16 TB drives in one of our servers at work and they weren’t even that expensive (relatively).

    • Natanael@infosec.pub

      It’s losing its cost advantage as time goes on. Long-term storage is still on tape (and that’s actively developed too!), flash keeps getting cheaper, and spinning disks have inherent bandwidth and latency limits. They’re probably not going away entirely, but their main use cases are being squeezed from both ends.

  • hapablap@lemmy.sdf.org

    My sample size of one (myself) has had one drive fail in decades, and it was a solid-state drive. Thankfully it failed in a strangely intermittent way and I was able to recover the data. Still, it surprised me, since one would assume solid state would be more reliable. The spinning rust has proven to be very reliable. But regardless, I’m sure SSDs will be (or already are) better in every way.

    • DSTGU@sopuli.xyz

      I believe you see the main issue with your experience: the sample size. With a small enough sample you can experience almost anything. Wisdom is knowing what you can and can’t extrapolate to the entire population.
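
      To make the sample-size point concrete, here’s a small simulation: give every simulated person the same handful of drives with the same underlying failure rate and look at how different their personal “experiences” turn out (the ~1.5% annualized failure rate is an assumed ballpark, not a measured number):

      ```python
      import random

      random.seed(42)
      AFR = 0.015              # assumed annualized failure rate, identical for every drive
      DRIVES_PER_PERSON = 5
      YEARS = 10
      PEOPLE = 10_000

      counts = []
      for _ in range(PEOPLE):
          failed = sum(
              any(random.random() < AFR for _ in range(YEARS))
              for _ in range(DRIVES_PER_PERSON)
          )
          counts.append(failed)

      for k in range(4):
          print(f"{counts.count(k) / PEOPLE:5.1%} of people saw exactly {k} drive failures")
      ```

      Identical hardware and identical odds, yet roughly half the simulated owners see zero failures while others see two or three. Individual anecdotes simply can’t resolve reliability differences that small.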

      • fuckwit_mcbumcrumble@lemmy.dbzer0.com

        I have one HDD that survived 20+ years, and an AliExpress SSD that died in 6 months. Therefore all SSDs are garbage!!!

        That’s also the only SSD I’ve ever had fail on me, and I’ve had them since 2011. In that same time I’ve had probably four HDDs fail on me. Even then, I know to use data from companies like Backblaze, which have infinitely more drives than I do.

  • AnUnusualRelic@lemmy.world

    I’m about to build a home server with a lot of storage (relatively speaking: around 6 or 8 × 12 TB as a ballpark), and so far I haven’t even considered anything other than spinning drives.

  • pr0sp3kt@lemmy.dbzer0.com

    I’ve had a terrible experience with HDDs all my life. Slow af, sector loss, corruption, OS corruption… I am traumatized. I got an 8 TB NVMe drive for less than $500, and since then I haven’t had a single problem (well, except that after a power failure, Btrfs CoW tends to act weird and sometimes doesn’t boot; you need manual intervention).

  • Korhaka@sopuli.xyz

    Probably at some point, as prices per TB continue to come down. I don’t know anyone buying a laptop with an HDD these days, and I can’t imagine ever buying one for a desktop again either. I’ve still got a couple of old ones active (one is 11 years old), but I do plan to replace them with SSDs at some point.

  • NeoNachtwaechter@lemmy.world

    Haven’t they said that about magnetic tape as well?

    Some 30 years ago?

    Isn’t magnetic tape still around? Isn’t even IBM one of the major vendors?

    • n2burns

      Anyone who has said that doesn’t know what they’re talking about. Magnetic tape is unparalleled for long-term/archival storage.

      This is completely different. For active storage, solid state has been much better than spinning rust for a long time; it’s just been drastically more expensive. What’s being argued here is that it’s not just about performance: while flash might be more expensive up front, it’s less expensive to run and maintain.

        • thedeadwalking4242@lemmy.world

          Hard drives have a longer shelf life than unpowered SSDs. HDDs are a good middle ground between SSD speed, tape stability, and price, so they won’t go anywhere. The data world exists in tiers.

          • enumerator4829@sh.itjust.works

            The flaw with hard drives shows up in large pools. The recovery speed when a drive fails is simply too slow unless you build huge pools, so you need additional drives for more parity.
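
            A rough sketch of the rebuild-time gap, assuming a sustained rebuild rate of about 250 MB/s for a large HDD and about 3 GB/s for an enterprise SSD (both assumed best-case figures; real rebuilds under live traffic are slower):

            ```python
            # Time to rebuild a single failed drive at a sustained rate.
            def rebuild_hours(capacity_tb: float, rate_mb_s: float) -> float:
                return capacity_tb * 1_000_000 / rate_mb_s / 3600

            print(f"30 TB HDD @  250 MB/s: {rebuild_hours(30, 250):5.1f} h")
            print(f"30 TB SSD @ 3000 MB/s: {rebuild_hours(30, 3000):5.1f} h")
            ```

            A rebuild window of more than a day is a long time to run with reduced redundancy, which is why wide HDD pools tend to carry extra parity.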

            I don’t know who cares about shelf life. These drives spin for their entire lives, which is 5-10 years. Use M-DISC or something if you want shelf life.

        • Echo Dot@feddit.uk

          Right up until an EMP wipes out all our data. I still maintain that we should be storing all our data on vinyl, doing it physically is the only guarantee.

  • Sixty@sh.itjust.works

    I’ll shed no tears, even as a NAS owner, once we get equivalent-capacity SSDs without breaking the bank :P

    • Appoxo@lemmy.dbzer0.com

      Considering the high prices for high-density SSD chips…
      Why are there no 3.5" SSDs with low-density chips?

      • jj4211@lemmy.world

        Not enough of a market

        The industry answer is that if you want that much volume of storage, you get something like six EDSFF or M.2 drives.

        3.5" is a useful form factor for platters, but it isn’t particularly needed to hold NAND chips. Meanwhile, instead of gating all those chips behind a single connector, you can have six connectors to drive performance. Again, that’s less important for a platter-based strategy, which is unlikely to saturate even a single 12 Gb/s link under most realistic access patterns, but SSDs can keep up with 128 Gb/s under utterly random I/O.
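
        For scale, here are the nominal link rates involved (raw signalling, not measured throughput; the Gen5 x4 figure assumes a typical current NVMe SSD):

        ```python
        # Nominal per-direction link rates, in Gb/s.
        sas_12g = 12            # one 12 Gb/s SAS lane, typical behind a 3.5" HDD bay
        nvme_gen5_x4 = 4 * 32   # PCIe Gen5 x4, as used by current NVMe SSDs

        print(f"NVMe Gen5 x4 is ~{nvme_gen5_x4 / sas_12g:.0f}x the raw rate of 12G SAS")
        # ...and a platter drive can't even saturate the SAS link on random I/O,
        # while an SSD can keep the NVMe link busy.
        ```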

        Tiny drives mean more flexibility. That storage product can go into NAS boxes, servers, desktops, the thinnest laptops, and embedded applications, maybe with tweaked packaging and cooling solutions. A product designed to host that many SSD boards behind a single connector is not going to be trivial to modify for any other use case, will bottleneck performance on a single interface, and is pretty much guaranteed to cost more to manufacture than selling the components as six drives.

  • dual_sport_dork 🐧🗡️@lemmy.world

    No shit. All they have to do is finally grow the balls to build SSDs in the same form factor as the 3.5" drives everyone in enterprise is already using, and stuff those to the gills with flash chips.

    “But that will cannibalize our artificially price inflated/capacity restricted M.2 sales if consumers get their hands on them!!!”

    Yep, it sure will. I’ll take ten, please.

    Something like that could easily fill the oodles of existing bays that are currently filled with mechanical drives, both on the home user/small-scale enthusiast side and in existing rackmount gear. But that’d be too easy.

    • jj4211@lemmy.world

      Hate to break it to you, but a 3.5" form factor would absolutely not be cheaper than an equivalent bunch of E1.S or M.2 drives. The price is not inflated due to the form factor; it’s driven primarily by the cost of the NAND chips, and you’d just need more of them to take advantage of the bigger area. To take advantage of the thickness of the form factor, it would need to be a multi-board solution. There would also be a thermal problem, since 3.5" bays aren’t designed for the thermal load of that much flash.

      Add to that that 3.5" bays currently top out at maybe 24 Gb/s SAS connectors, which means such a hypothetical product would be severely crippled by the interconnect: throughput-wise, in theory over 30-fold slower than an equivalent volume of E1.S drives. That’s bad enough, but SAS also has a single, relatively shallow queue, while an NVMe target has thousands of deep queues befitting NAND’s random-access behavior. So the product would have to be redesigned to handle that sort of workload, and if you’re doing that, you might as well do EDSFF. No one would buy something more expensive than the equivalent capacity in E1.S drives that performs only as well as the SAS connector allows.
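
      A quick sanity check on that “over 30-fold” figure, assuming one 24 Gb/s SAS connector for the hypothetical 3.5" SSD versus six E1.S drives each on PCIe Gen5 x4 (nominal link rates only):

      ```python
      sas_24g = 24                     # Gb/s, the single SAS link on the 3.5" device
      e1s_gen5_x4 = 4 * 32             # Gb/s per E1.S drive on PCIe Gen5 x4
      e1s_aggregate = 6 * e1s_gen5_x4  # six E1.S drives of equivalent total capacity

      print(f"six E1.S drives vs one 24G SAS link: ~{e1s_aggregate / sas_24g:.0f}x the bandwidth")
      ```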

      EDSFF defined four general form factors: E1.S, which is roughly M.2 sized; E1.L, which is over a foot long and offers the absolute most data per unit volume; and E3.S and E3.L, which aim to be more 2.5"-like. As far as I’ve seen, the market only really wants E1.S despite the bigger form factors, so I think the market has shown that 3.5" wouldn’t have takers.

    • Hozerkiller

      I hope you’re not putting M.2 drives in a server if you plan on reading the data from them at some point. Those are for consumers; there’s an entirely different form factor for enterprise storage using NVMe drives.

        • Hozerkiller

          M.2 drives like to get hot and die. They work great until they don’t.

            • Hozerkiller

              TBH I have an old SSD for the host and rust for all my data. I don’t have M.2 or U.2 in my server, but I’ve heard enough horror stories to just use U.2 if the time comes.

      • jj4211@lemmy.world

        Enterprise systems do have M.2, though admittedly it’s only really used for pretty disposable boot volumes.

        And though they aren’t used much as data volumes, that’s not due to unreliability; it’s due to hot-swap support and power levels.

      • Appoxo@lemmy.dbzer0.com

        Note that SanDisk is its own company now. But I don’t know if they are still a subsidiary or were completely spun off from WD.

  • solrize@lemmy.world

    HDDs were a fad; I’m waiting for the return of tape drives. 500 TB on a $20 cartridge, and I can live with the two-minute seek time.

    • Eldritch@lemmy.world

      They can be made any size. Most SATA SSDs are just a plastic housing around a board with some chips on it. The right question is when we’ll have a storage technology with the durability and reliability of spinning magnetized hard-drive platters. The NAND flash chips used in most SSDs and M.2 drives are much more reliable than they were initially, but for long-term retention they’re still off by quite a bit from traditional hard drives. Hard drives can generally sit for about 10 years before bit rot becomes a major concern; NAND flash is only a year or two, IIRC.

      • db2@lemmy.world

        Longer if it gets some kind of occasional power. I think I read that somewhere.

    • ramble81@lemm.ee

      Given that there are already 32 TB 2.5” SSDs, what does a 3.5” one buy you that you couldn’t get with an adapter?

      • Appoxo@lemmy.dbzer0.com

        A better price, since low-density chips are cheaper. And you can fit more of them into a bigger space = cheaper.

        • jj4211@lemmy.world

          The lowest-density chips are still going to be way smaller than even an E1.S board. The only place you might save is by needing fewer SSD controllers, but a 3.5" SSD would have to be, at best, a stack of SSD boards, probably three, plugged into some interposer board. Allowing for the interposer, you could maybe come up with roughly 120-square-centimeter boards, and E1.L drives are about 120 square centimeters anyway. So if you’re obsessed with the most NAND chips per unit volume, the E1.L form factor is already, in theory, as capable as a hypothetical 3.5" SSD. If you don’t like the overly long E1.L, then E3.L would be a more reasonable length with 85% of the board surface area. All that said, I’ve almost never seen anyone go for anything except E1.S, which is more like M.2 sized.

          So 3.5" would be more expensive, slower (unless you did a new design), and thermally challenged.

        • jj4211@lemmy.world

          The market for customers that want to buy new disks but do not want to buy new storage/servers with EDSFF is not a particularly attractive market to target.

        • catloaf@lemm.ee

          What kind of server? Dell’s caddies have adapters, and I’m pretty sure some have screw holes on the bottom so you don’t need an adapter.

      • synicalx@lemm.ee

        A big heat sink like they used to put on WD Raptor drives.

      • earphone843@sh.itjust.works

        They should be cheaper, since there’s a bunch more space to work with. You don’t have to make the storage chips as small.

        • jj4211@lemmy.world

          Chips that can’t fit on a 76 mm board don’t exist in any market. There’s been some fringe chasing of wafer-scale for compute, but it’s a nightmare of cost and yield with zero applicable benefits for storage. You can fit more chips on a bigger board with fewer controllers, but a 3.5" form factor wouldn’t have any more usable board surface area than an E1.L design, and not much more than an E3.L. There’s enough height in the thickest 3.5" drives to combine three boards, but at least the middle board would be absolutely starved for airflow, unless you changed the airflow specifications expected of 3.5" devices and made the case ventilated.

      • Kairos@lemmy.today

        Because we don’t have to pack it in as tightly. It’d mean higher capacities for cheaper for consumers.

        Also cooling.

        • enumerator4829@sh.itjust.works

          It’s not the packaging that costs money or limits us, it’s the chips themselves. If we crammed a 3.5” form factor full of flash storage, it would be far outside the budgets of mortals.

            • enumerator4829@sh.itjust.works

              Nope. Larger chips mean lower yields in the fab, which means more expensive. This is why we have chiplets in our CPUs nowadays. The production cost of a chip is superlinear in its size.
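
              The superlinear cost can be sketched with the classic Poisson yield model, where yield drops exponentially with die area (the defect density used here is an arbitrary illustrative value):

              ```python
              import math

              DEFECTS_PER_CM2 = 0.1   # assumed defect density, purely illustrative

              def relative_cost(area_cm2: float) -> float:
                  """Silicon used scales with area; yield drops exponentially with it."""
                  yield_fraction = math.exp(-DEFECTS_PER_CM2 * area_cm2)
                  return area_cm2 / yield_fraction

              for area in (0.5, 1, 2, 4, 8):
                  print(f"{area:4.1f} cm^2 die -> relative cost {relative_cost(area):6.2f}")
              ```

              Doubling the die area more than doubles the cost per good die, which is why both flash and CPUs favour many small chips over one big one.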

              • earphone843@sh.itjust.works

                Then lower the storage density. Making things as small as possible almost always ends up being more expensive.

                • jj4211@lemmy.world

                  Lower-storage-density chips would still be tiny, geometry-wise.

                  A wafer of chips will have defects; the larger the chip, the bigger the portion of the wafer spoiled per defect. Big chips are way more expensive than small chips.

                  No matter the capacity of the chips, they’re still going to be tiny and placed onto circuit boards. The circuit boards can be bigger, but area density is what matters, not volumetric density. 3.5" is somewhat useful for platters because of its width and depth, and particularly its height for multiple platters, which isn’t interesting for a single SSD assembly; a 3.5" SSD would most likely waste all that height. Yes, you could stack multiple boards in one assembly, but it would be better to have those boards as separately packaged assemblies anyway (better performance and thermals with no cost increase).

                  So one can point out that a 3.5" footprint is a decently big board, and maybe get height-efficient by specifying a new 3.5" form factor that’s something like 6 mm thick. Well, you’re mostly there with the E3.L form factor, but no one even wants those (they’re designed around 2U form-factor expectations). E1.L basically ties that 3.5" footprint in board geometry, but no one seems to want those either. E1.S seems to just be what everyone will be getting.

                • enumerator4829@sh.itjust.works

                  Not economical. Storage is already made on far larger fab nodes than CPUs and other components. This is a case where higher density actually can be cheaper: “mature” nodes are most likely cheaper than “ancient” process nodes simply due to age and efficiency. (See also the disaster in the auto industry during COVID: carmakers stopped ordering parts made on ancient process nodes, so those nodes were shut down permanently due to cost. After COVID, fun times for the automakers that had to modernise.)

                  Go compare prices: new NVMe M.2 will most likely be cheaper per TB than SATA 2.5”. The extra plastic shell, extra shipping volume, and SATA controller make up the difference. 3.5” would make it even worse. In the datacenter, we’re moving towards “rulers”, with 61 TB available now and probably 120 TB soon. Those are expensive, but the cost per TB is actually not that horrible compared to consumer drives.

      • jj4211@lemmy.world

        I’m not particularly interested in watching a 40-minute video, so I skimmed the transcript a bit.

        As my other comments show, I know there are reasons why 3.5" doesn’t make sense in an SSD context, but I didn’t see anything in my skim of the transcript that seems relevant to that question. They’re mostly talking about storage density rather than why not package bigger (and the industry is packaging bigger, just not anything resembling 3.5", because it doesn’t make sense).