AMD’s new CPU hits 132fps in Fortnite without a graphics card

It also gets 49fps in BG3, 119fps in CS2, and 41fps in Cyberpunk 2077 using the new AMD Ryzen 8700G, all without the need for an extra CPU cooler.

  • inclementimmigrant@lemmy.world · 22 points · 5 months ago

    Mind you, it only gets these frame rates at low settings. While that’s pretty damn impressive for an APU, it still serves a very niche market at this point, and I don’t see it getting all that much traction myself.

    • BorgDrone@lemmy.one · 9 up / 4 down · 5 months ago

      I think the opposite is true. Discrete graphics cards are on the way out; SoCs are the future. There are just too many disadvantages to having a discrete GPU and CPU, each with its own RAM. We’ll see SoCs catch up and eventually overtake PCs with discrete components, especially with the growth of AI applications.

        • BorgDrone@lemmy.one · 2 points · 5 months ago

          They may build dedicated PCs for training, but those models will be used everywhere. All computers will need to have hardware capable of fast inference on large models.

      • Corgana@startrek.website · 1 point · 5 months ago

        I agree, especially with the prices of graphics cards being what they are. The 8700G can also fit in a significantly smaller case.

        • BorgDrone@lemmy.one · 3 points · 5 months ago

          Unified memory is also huge for the performance of AI tasks, especially with more specialized accelerators being integrated into SoCs. The CPU, GPU, neural engine, and video encoders/decoders can all access the same RAM with zero overhead. You can decode a video, have the GPU preprocess the image, then feed it to the neural engine for whatever kind of ML task, without being limited by the low bandwidth of the PCIe bus or any latency from copying data back and forth.
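          The zero-copy argument above can be sketched as a toy model: with discrete components, each stage’s output crosses the PCIe bus, while on a unified-memory SoC every engine reads the same buffer. The stage functions and copy counter below are illustrative assumptions, not any real decoder/GPU/NPU API.

```python
# Toy comparison of a discrete-GPU pipeline (explicit copies over a
# simulated PCIe bus) vs. a unified-memory SoC pipeline (shared buffer).
copies = {"n": 0}

def pcie_transfer(data):
    """Simulate moving a buffer across the PCIe bus: a full copy."""
    copies["n"] += 1
    return list(data)

def decode(frame):            # stands in for the video decoder
    return [x * 2 for x in frame]

def gpu_preprocess(frame):    # stands in for GPU image preprocessing
    return [x + 1 for x in frame]

def npu_infer(frame):         # stands in for the neural engine
    return sum(frame)

def discrete_pipeline(frame):
    decoded = decode(frame)
    on_gpu = pcie_transfer(decoded)   # system RAM -> GPU VRAM
    pre = gpu_preprocess(on_gpu)
    on_npu = pcie_transfer(pre)       # VRAM -> accelerator memory
    return npu_infer(on_npu)

def unified_pipeline(frame):
    # All engines address the same physical RAM: no transfers at all.
    return npu_infer(gpu_preprocess(decode(frame)))

frame = [1, 2, 3, 4]
assert discrete_pipeline(frame) == unified_pipeline(frame)
print("bus copies with discrete parts:", copies["n"])   # -> 2
print("bus copies with unified memory: 0")
```

          The results are identical; the difference is the two bus crossings per frame that the discrete path pays, which is exactly the overhead the comment describes.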

          My predictions: Nvidia will focus more and more on the high-end AI market with dedicated AI hardware while losing interest in the consumer market. AMD already has APUs; they will take the next logical step and move towards full SoCs. Apple is already in that market and seems to be getting serious about their GPUs, so I expect big improvements there in the coming years. No clue what Intel is up to, though.