• Poutinetown · 3 months ago

    A phone CPU challenging a top-of-the-line desktop CPU is crazy.

      • azuth@sh.itjust.works · 3 months ago

        It doesn’t really challenge the desktop CPU in multithreaded tests, where the 170 W is actually relevant.

        The test also includes AI tasks; the Apple chip seems to spend around 20% of its die area on those, while the desktop CPU has none.

      • notthebees@reddthat.com · 3 months ago

        That’s actually nuts. I have an iPhone X, and I remember when it came out everyone was surprised that it was as fast as an i5-7200U. Sure, that’s a dual-core laptop chip, but it’s still very impressive.

      • Ugurcan@lemmy.world · 3 months ago

        I have to demonstrate to my friends every time how my M2 MBP blows my Ryzen 5950X desktop out of the water in my professional line of work.

        I can’t quite see what x86/x64 chips are good for anymore, other than gaming, nostalgia, and spec boasting.

        • WolfLink@sh.itjust.works · 3 months ago

          I have a 5950X computer and a Mac mini with some form of M2.

          I render video on the M2 machine because I have that sweet perpetual Final Cut Pro license, but I then copy the output to the 5950X machine and use ffmpeg to recompress it, which is like an order of magnitude faster than doing the compression on the M2.
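
          For reference (the filenames and settings here are just an example, not my exact workflow), a CPU-only recompress looks something like:

              ffmpeg -i render.mov -c:v libx264 -preset slow -crf 18 -c:a copy out.mp4

          libx264 is ffmpeg’s software H.264 encoder, so this runs entirely on the CPU and scales nicely across all 16 cores of the 5950X.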

          I have some other tasks I’ve given both computers, and when the 5950X actually gets to use all its cores, it blows the M2 out of the water.

          • Ugurcan@lemmy.world · 3 months ago

            Is it possible you’re using your desktop’s GPU for ffmpeg encoding, and not the CPU, by chance?

            • WolfLink@sh.itjust.works · 3 months ago

              No, with ffmpeg you have to request GPU encoding manually, and the hardware encoders’ options are more limited, so I usually do CPU encoding unless I’m prioritizing encoding speed over quality for some reason. (And yes, I’ve verified it’s using the CPU by watching CPU usage while it encodes.)
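
              To illustrate (example filenames, and assuming an NVIDIA card): the first command below encodes on the CPU because libx264 is a software encoder, while the second uses the GPU only because h264_nvenc is requested explicitly:

                  ffmpeg -i in.mp4 -c:v libx264 -crf 18 out_cpu.mp4
                  ffmpeg -i in.mp4 -c:v h264_nvenc -b:v 8M out_gpu.mp4

              NVENC is fast, but its rate-control options are coarser than x264’s, which is the quality tradeoff I mean.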

        • psvrh · 3 months ago

          I can’t quite see what x86/x64 chips are good for anymore, other than gaming, nostalgia, and spec boasting.

          Probably two things:

          • Cost- and power-no-object performance, which isn’t necessarily a positive as it encourages bad behaviour.
          • The platform is much more open, courtesy of some quirks of how IBM spec’d the BIOS back before the dawn of time. Yes, you can get ARM and RISC-V licenses (OpenPOWER is kind of a non-entity these days) and design your own SBC, but every single ARM and RISC-V machine boots differently, while x86 and amd64 have a standard boot process.

          All those fancy “Copilot ready” Qualcomm machines? They’re following the same path as ARM-based smartphones: every single machine is bespoke, so you’re either hunting for machine-specific boot images on whatever the equivalent of xda-developers is, or (more likely) just scrapping them when they’re used up, which will probably happen a lot faster given Qualcomm’s history with support.

          I’d love to see a replacement for x86/amd64 that isn’t a power suck, but has an open interface to BIOS.