• IHeartBadCode@fedia.io

    Thermal is a wall to contend with as well. At the moment SSDs get their density from 3D stacking the planes of substrate that make up the memory cells. Each layer contributes some heat, and at some point the layers in the middle get too hot: they are heated by the layers below and are too far from the top to dissipate that heat upwards fast enough.

    One way to address this was the multi-level cell (MLC), where instead of on/off, the voltage within the cell represents multiple bits, e.g. 0–1.5v = 00, 1.6–3v = 01, 3.1–4.5v = 10, 4.6–5v = 11. But that requires sense amplifiers that can resolve those levels, which aren’t outright difficult to etch; they just add complexity to ensure the amplifier reads the correct value. We’ve since moved to triple- and quad-level cells (eight and sixteen voltage levels, holding three and four bits per cell), and the error correction circuits behind the sense amplifiers are wild. But all NAND FGMOS cells leak, and the more levels you pack into a cell, the narrower the voltage gap between one level and the next, so even small leaks can be the difference between sensing one level and another. So at some point packing more levels into the cell will just lead to a cell that leaks too quickly for the word “storage” to be applied to the device. It’s not really storage any longer if powering the device off for half a year puts all the data at risk.
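    The level-sensing idea above can be sketched in a few lines. This is a toy model, not how real sense amplifiers or real NAND threshold voltages work (actual parts use Gray-coded levels and different voltage ranges); the boundaries below are just the illustrative numbers from the text.

```python
# Toy model of multi-level cell sensing (illustrative only).

# Four-level (2-bit) read: window boundaries taken from the text's example.
MLC_THRESHOLDS = [1.5, 3.0, 4.5]           # splits 0-5 V into four windows
MLC_SYMBOLS = ["00", "01", "10", "11"]

def sense(voltage, thresholds, symbols):
    """Return the bit pattern for whichever voltage window the cell falls in."""
    level = sum(voltage > t for t in thresholds)
    return symbols[level]

# A freshly written cell at 3.2 V reads as "10" ...
print(sense(3.2, MLC_THRESHOLDS, MLC_SYMBOLS))        # -> 10

# ... but after leaking 0.3 V of charge it crosses a window boundary
# and misreads as "01". More levels per cell mean narrower windows,
# so the same leak corrupts data sooner.
print(sense(3.2 - 0.3, MLC_THRESHOLDS, MLC_SYMBOLS))  # -> 01
```

    The second read is exactly the failure mode described above: the data didn’t change, the charge just drifted across a sensing boundary.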

    So once going upwards and packing hits a wall, the next direction is moving out. But the more you move outward, the further you place the physical memory cells from the controller. That’s a non-zero amount of distance, and the speed of light is only so fast. One light-nanosecond is about 300 millimetres, so a device running a 1GHz clock has at most that distance to cover in a single tick, and that’s the ideal case; heat, signal integrity, and so on all conspire to make it less than ideal. So you can only go so far out before you need caches at the in-between steps and block-access scheduling, which make the entire thing more complex and can slow it down.
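    The distance budget above is easy to check as arithmetic. A hedged sketch: the first number assumes ideal vacuum propagation; real on-board traces carry signals at very roughly half the speed of light (the 0.5 factor here is a ballpark assumption, not a measured figure), so the real budget is tighter still.

```python
# Back-of-the-envelope check of the "one light-nanosecond" argument.

C = 299_792_458  # speed of light in vacuum, m/s

def distance_per_tick_mm(clock_hz, velocity_factor=1.0):
    """Farthest a signal can travel one way in a single clock period, in mm."""
    period_s = 1.0 / clock_hz
    return C * velocity_factor * period_s * 1000

print(round(distance_per_tick_mm(1e9)))        # ~300 mm at 1 GHz, ideal
print(round(distance_per_tick_mm(1e9, 0.5)))   # ~150 mm at an assumed 0.5c
```

    And a round trip (request out, data back) halves those numbers again, which is why distant chips start needing intermediate buffering.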

    And there are ways to get around that as well, but all of them begin to really increase the cost, like having multi-port chips accessed over multi-channel buses, basically creating a small network of chips inside your SSD. Sort of like how a lot of CPUs are starting to swap over to chiplet designs. We can absolutely keep going, but there’s going to be cost associated with that “keep going” that’s going to be hard to bring down. So there will be a point where the “cost to utility” equation for end-users starts playing a much larger role, long before we hit some physical wall.
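    One reason the multi-channel "network of chips" layout pays off can be sketched with a toy block mapping. This is a simplification under assumed numbers (a hypothetical four-channel controller with round-robin striping); real SSD firmware maps pages across channels, dies, and planes with far more sophistication.

```python
# Toy sketch of channel striping: consecutive logical blocks land on
# different channels, so their transfers can overlap, RAID-0 style.
# Channel count is an assumption for illustration.

NUM_CHANNELS = 4

def channel_for_block(block_no, num_channels=NUM_CHANNELS):
    """Round-robin mapping of a logical block onto a channel."""
    return block_no % num_channels

# Eight consecutive blocks spread across four channels, two deep each,
# so up to four reads can be in flight at once instead of one.
layout = {}
for blk in range(8):
    layout.setdefault(channel_for_block(blk), []).append(blk)
print(layout)  # {0: [0, 4], 1: [1, 5], 2: [2, 6], 3: [3, 7]}
```

    The catch, as noted above, is that every extra channel means more controller ports, more bus routing, and more scheduling logic, and that cost is hard to bring down.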

    That said, the 200-layer range was once thought to be the wall for stacking due to heat; some creative work got the layer count past 300, but the chips do indeed generate a lot more heat these days. Maybe heat sinks and fans for your SSD aren’t too far off in the future, and passive cooling with a heat sink is already coming into vogue with SSDs.

    The article indicated that Samsung and SK hynix predict being able to hit 1000+ layers, which is crazy to think about, because even with the tricks employed today to get heat out of the middle layers faster, I don’t see how those same tricks get us past 500+ layers without a major change in how the cells are produced, and there’s usually a lot of R&D behind such a thing. So maybe they’ve been working on something nobody else knows about, or maybe they’re planning active cooling for SSDs? Who knows. But 1000+ layers is wild to think about, and I’m pretty sure such chips are not going to come down in price as quickly as some consumers might hope. As the process gets more complex, the time before prices start to fall stretches out. And that slows overall demand for more density, as the buyers who find the higher cost worth their specific need shrink to very niche applications.