• just_another_person@lemmy.world · 6 days ago

    Eh. LPCAMM seems more useful overall as a product. Faster DDR at this point in time has diminishing returns.

    It’ll be interesting to see how this plays out though, because there are a few different paths to solve this type of problem with DDR5. Personally, I’d love for much lower power, but a wider bus, which is where I thought things were heading.

    • Vinny_93@lemmy.world · 6 days ago

      Well, we’ve seen CAS latency increase almost as quickly as DDR speeds. CAMM should address this issue by shortening the distance from the CPU to RAM, at least for laptops.

      I’d say the DIMM form factor has pretty much hit a dead end with DDR5.
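The point about CAS latency keeping pace with DDR speeds can be shown with a quick back-of-the-envelope calculation (my own sketch; the module specs used are just common retail examples):

```python
# CAS latency is quoted in clock cycles, so a higher CL on faster DDR can
# mean the exact same real-world latency in nanoseconds.

def cas_latency_ns(cl_cycles: float, transfer_rate_mt_s: float) -> float:
    """True CAS latency in ns: cycles divided by the memory clock.

    DDR transfers twice per clock, so clock (MHz) = transfer rate (MT/s) / 2.
    """
    clock_mhz = transfer_rate_mt_s / 2
    return round(cl_cycles / clock_mhz * 1000, 2)  # cycles/MHz -> ns

# DDR4-3200 CL16 vs DDR5-6000 CL30: both land at about 10 ns.
print(cas_latency_ns(16, 3200))  # -> 10.0
print(cas_latency_ns(30, 6000))  # -> 10.0
```

So despite the headline transfer rate nearly doubling, first-word latency has barely moved, which is what the comment is getting at.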

    • PrivateNoob@sopuli.xyz · edited · 6 days ago

      Well, usually yes, but if CPU manufacturers decide to really lean into cramming lots of cores into CPUs (like Intel’s big.LITTLE-style hybrid CPUs, but with even more cores), then we’ll probably need faster RAM, since more cores means more memory bandwidth demand, and so far that demand has always been met with faster RAM. (Or we could just increase the number of memory channels.)
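The "more cores need more bandwidth" trade-off is easy to see in a rough sketch (my own illustration; the dual-channel DDR5-6400 configuration is just an example, not from the thread):

```python
# Theoretical peak DRAM bandwidth scales with channel count and transfer
# rate, while the per-core share shrinks as core counts climb.

def dram_bandwidth_gb_s(channels: int, mt_per_s: float, bus_bits: int = 64) -> float:
    """Peak bandwidth in GB/s = channels * bus width in bytes * transfer rate."""
    return channels * (bus_bits / 8) * mt_per_s * 1e6 / 1e9

bw = dram_bandwidth_gb_s(channels=2, mt_per_s=6400)  # dual-channel DDR5-6400
print(round(bw, 1))  # peak GB/s for the whole package
for cores in (8, 16, 32):
    # Each doubling of cores halves the bandwidth available per core.
    print(cores, round(bw / cores, 1))
```

Which is why the fix is either faster transfer rates (the usual route) or more channels, exactly as the comment says.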

    • Dudewitbow@lemmy.zip · 6 days ago

      faster ram generally has diminishing returns for system use, however it does matter for gpu compute on igpus (e.g. gaming and ML/AI would make use of the increased memory bandwidth).

      it’s not easy to simply push a wider bus, because memory bus width more or less drives design complexity, and thus cost. it’s cheaper to push memory clocks than to design a die with a wider bus.
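On paper, a wider bus and a faster clock buy exactly the same peak bandwidth; the difference is die and package cost, not the math. A minimal sketch (my own illustrative numbers):

```python
# Peak bandwidth = bus width in bytes * transfer rate: doubling the bus
# width at half the clock yields the identical figure.

def peak_gb_s(bus_bits: int, mt_per_s: float) -> float:
    return round(bus_bits / 8 * mt_per_s * 1e6 / 1e9, 1)

print(peak_gb_s(64, 6400))   # narrow bus, fast clock
print(peak_gb_s(128, 3200))  # wide bus, slower clock: same peak GB/s
```

Since the two are interchangeable in theory, vendors pick the cheaper lever, which is clocks.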

      • Paragone@lemmy.world · 5 days ago

        Computational-Fluid-Dynamics simulations are RAM-limited, iirc.

        I’m presuming many AI models are, too, since some of them require stupendous amounts of RAM, which no non-server machine would have.

        “diminishing returns” is what Intel’s “beloved” Celeron garbage was pushing.

        When I ran Memtest86+ ( or the other version, don’t remember ) and saw how insanely slow RAM was compared with L2 or L3 cache, and then discovered how incredible the machine upgrade from SATA to NVMe was…

        Get the fastest NVMe and RAM you can: it puts your CPU where it should have been all along, and that difference between a “normal” build and an effective build is the misframing the whole industry has been establishing for decades.

        _ /\ _
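The cache-vs-RAM-vs-storage gap the comment describes can be put in rough numbers. These are my own order-of-magnitude ballpark figures for a typical desktop, not measurements from the thread:

```python
# Approximate access latencies per tier of the memory hierarchy; each step
# down is roughly an order of magnitude (or more) slower than the last.
latency_ns = {
    "L2 cache": 4,
    "L3 cache": 12,
    "DRAM":     80,
    "NVMe SSD": 20_000,
    "SATA SSD": 80_000,
}

for tier, ns in latency_ns.items():
    print(f"{tier:>9}: {ns:>7} ns  (~{ns // latency_ns['L2 cache']}x L2)")
```

Seen this way, the SATA-to-NVMe jump (roughly 4x here) and the cache-to-DRAM gap (roughly 20x) are exactly where the "insanely slow" impression comes from.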

    • itsmect@monero.town · 6 days ago

      LPCAMM may have better specs, but DIMM requires a smaller area on the PCB and can make better use of the vertical space.

        • itsmect@monero.town · 6 days ago

          “LPCAMM seems more useful overall as a product.”

          Only if you need 2–4 sticks; otherwise they take up too much PCB space. Look at servers and how a good chunk of their volume is filled with dozens of sticks. You can’t simply lay them all down flat.