• Uli@sopuli.xyz

      I remember being thrilled to move from floppies to a 16 MB flash drive for my school assignments, even if I did have to constantly download and reinstall the USB Mass Storage drivers for the Windows 98 SE computers in the library, which reset every night. And the transfer speed was SLOW.

      The fact that you can get a terabyte flash drive now, which can hold 62,500 of my school assignment drives, is mind-blowing to me.
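
      (Quick sanity check, assuming decimal units: 1 TB = 1,000,000 MB, and 1,000,000 ÷ 16 = 62,500, so that figure holds up.)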

      • MDCCCLV

        I always wanted the Zip drives with 250 MB capacity.

        • MonkeMischief@lemmy.today

          Those were pretty cool. My dad had a single one in a hard plastic case; I want to say it was like 100 MB or something? I loved how chunky and solid it was.

          I do feel like it’d be cool to have a storage medium that at least feels like that again. Like sliding a big hot-swappable SATA SSD into a slot and getting a satisfying “kaCHUNK” and a little busy light.

          • vaultdweller013@sh.itjust.works

            At the very least that sounds like a good use for the front drive bays in a modern computer case. As you said, allow hot swapping, and it'd be a pretty good system for games in particular.

  • Smoolak@lemmy.world

    The meme doesn't make sense. An SRAM cache of that size would be so slow that you would most likely save clock cycles by reading directly from RAM and not having a cache at all…

    • cogman@lemmy.world

      Slow? Not necessarily.

      The main issue with that much memory is the data routing and the physical locality of the memory. Assuming you (somehow) could shrink down the distance from the cache to the registers and could have wide enough data/request lines, you could have data from such a cache in ~4 cycles (assuming L1 timing and a hit).

      What slows down memory for L2 is the wider address space and slower residence checks. L3 gets a bit slower because of an even wider address space, but it also has to deal with concurrency issues since it's shared among cores. It also ends up being slower because it physically has to be further away from the cores due to its size.

      If you ever look at a CPU die, you'll see that L1 caches are generally tiny and embedded right in the center of each core. L2 tends to be bolted onto the sides of the physical cores. And L3 tends to be the largest amount of silicon real estate on a CPU package. This all contributes to the increasing fetch latency at each layer, along with the fact that you have to check the closest layers first (an L3 hit, for example, means the CPU checked L1 and L2 and missed both, which takes time; so L3 access time will always be at least the L1 + L2 check times).
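
      You can actually see those layers from software. Here's a minimal pointer-chasing sketch in C (assuming a Linux-like system with clock_gettime; the working-set sizes are rough guesses at typical L1/L2/L3 boundaries, not tuned to any particular CPU). Each load depends on the previous one, so the fetch latency can't be hidden:

      ```c
      /* Pointer-chasing latency sketch: chase a randomized cycle through
       * working sets sized to land roughly in L1, L2, L3, and DRAM. */
      #include <stdint.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      static uint64_t rng_state = 0x9E3779B97F4A7C15ULL;

      /* xorshift64: cheap PRNG so we don't depend on RAND_MAX. */
      static uint64_t xorshift64(void) {
          rng_state ^= rng_state << 13;
          rng_state ^= rng_state >> 7;
          rng_state ^= rng_state << 17;
          return rng_state;
      }

      int main(void) {
          /* Illustrative sizes: 16 KiB (L1-ish), 256 KiB (L2-ish),
           * 8 MiB (L3-ish), 256 MiB (DRAM). */
          size_t sizes[] = {16u << 10, 256u << 10, 8u << 20, 256u << 20};

          for (int s = 0; s < 4; s++) {
              size_t n = sizes[s] / sizeof(size_t);
              size_t *chain = malloc(n * sizeof(size_t));
              if (!chain) return 1;

              /* Sattolo's algorithm: one random cycle over all slots,
               * which defeats the hardware prefetcher. */
              for (size_t i = 0; i < n; i++) chain[i] = i;
              for (size_t i = n - 1; i > 0; i--) {
                  size_t j = xorshift64() % i;
                  size_t t = chain[i]; chain[i] = chain[j]; chain[j] = t;
              }

              const size_t iters = 20u * 1000 * 1000;
              size_t idx = 0;
              struct timespec t0, t1;
              clock_gettime(CLOCK_MONOTONIC, &t0);
              for (size_t i = 0; i < iters; i++) idx = chain[idx];
              clock_gettime(CLOCK_MONOTONIC, &t1);

              double ns = (t1.tv_sec - t0.tv_sec) * 1e9
                        + (double)(t1.tv_nsec - t0.tv_nsec);
              /* Printing idx keeps the compiler from deleting the loop. */
              printf("%8zu KiB: %6.2f ns/load (idx=%zu)\n",
                     sizes[s] >> 10, ns / (double)iters, idx);
              free(chain);
          }
          return 0;
      }
      ```

      On a typical desktop you'd expect the ns/load figure to step up at each boundary, from roughly a nanosecond while the chain fits in L1 to tens of nanoseconds once it spills into DRAM.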

      • Smoolak@lemmy.world

        I agree. When evaluating cache access latency, it is important to consider the entire read path rather than just the intrinsic access time of a single SRAM cell. Much of the latency arises from all the supporting operations required for a functioning cache, such as tag lookups, address decoding, and bitline traversal. As you pointed out, implementing an 8 GB SRAM cache on-die using current manufacturing technology would be extremely impractical. The physical size would lead to substantial wire delays and increased complexity in the indexing and associativity circuits. As a result, the access latency of such a large on-chip cache could actually exceed that of off-chip DRAM, which would defeat the main purpose of having on-die caches in the first place.
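
        To make those supporting operations concrete, here's a toy sketch in C of the lookup path in a set-associative cache. The geometry (64-byte lines, 1024 sets, 8 ways, i.e. 512 KiB) is invented purely for illustration, and real hardware compares the ways' tags in parallel rather than in a loop:

        ```c
        /* Toy lookup path for a set-associative cache: slice the address
         * into offset / set index / tag, then compare tags within the set.
         * Geometry is invented for illustration only. */
        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define LINE_BITS 6                      /* 64-byte lines    */
        #define SET_BITS  10                     /* 1024 sets        */
        #define WAYS      8                      /* 8-way -> 512 KiB */
        #define NUM_SETS  (1u << SET_BITS)

        typedef struct { bool valid; uint64_t tag; } line_t;
        static line_t cache[NUM_SETS][WAYS];

        /* A bigger cache needs more set-index bits (bigger decoders),
         * longer bitlines/wires, and more tag comparisons -- that's where
         * the extra latency creeps in, not in the SRAM cells themselves. */
        static bool lookup(uint64_t addr) {
            uint64_t set = (addr >> LINE_BITS) & (NUM_SETS - 1);
            uint64_t tag = addr >> (LINE_BITS + SET_BITS);
            for (int way = 0; way < WAYS; way++)   /* parallel in hardware */
                if (cache[set][way].valid && cache[set][way].tag == tag)
                    return true;                   /* tag match: hit */
            return false;                          /* miss: next level */
        }

        int main(void) {
            uint64_t addr = 0x7f0000beef40ULL;
            uint64_t set  = (addr >> LINE_BITS) & (NUM_SETS - 1);

            /* Install the line in way 0, then probe a hit and a miss. */
            cache[set][0].valid = true;
            cache[set][0].tag   = addr >> (LINE_BITS + SET_BITS);
            printf("same line: %s\n", lookup(addr) ? "hit" : "miss");
            printf("same set, different tag: %s\n",
                   lookup(addr + (1ULL << 30)) ? "hit" : "miss");
            return 0;
        }
        ```

        The point is that the SRAM cells are only one part of the read path: the set decoder grows with the number of sets, the tag comparisons grow with associativity, and the wire delays grow with physical area, which is exactly why scaling this to 8 GB on-die falls apart.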

  • Johanno@feddit.org

    I always thought it would be funny to run an OS from a USB stick.

    Never would I have thought that a stick-sized drive would exceed the default storage configuration of a desktop PC.

    2 TB in one small NVMe drive?! Wtf. Amazing but also crazy.

    • epicstove

      When my dad first saw an NVMe drive he had to triple-check what he was looking at, because in his old '70s computer brain there's no fucking way something so small and unmoving can hold so much data, read/write it so fast, and all for a relatively cheap price.

    • MonkeMischief@lemmy.today

      Something I was able to do with my old OnePlus 3 phone was use it as a bootable Linux USB. It was a pretty neat trick!

      It was really convenient to just snag a work laptop and boot it into Puppy Linux (which lives entirely in RAM) to browse around and such without my job looking too closely and being creepy about it.

      Disclaimer: IT departments are various kinds of chill, scrutinizing, lazy, or pathologically psycho; YMMV greatly. Try at your own risk. Lol