• AlmightySnoo 🐢🇮🇱🇺🇦@lemmy.world · 1 year ago

    Double and triple buffering are techniques in GPU rendering (they’re also used in GPU computing, but only up to double buffering there, since triple buffering is pointless when running headless).

    Without them, if you want to do some number crunching on your GPU and your data lives in host (“CPU”) memory, you’d basically transfer a chunk of that data from the host to a buffer in device (GPU) memory and then run your GPU algorithm on it. There’s one big issue here: during the memory transfer, your GPU is idle because you’re waiting for the copy to finish, so you’re wasting precious GPU compute.
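
    To make that concrete, here’s a minimal CUDA sketch of that naive, fully synchronous pattern (the process kernel and the sizes are made-up placeholders, not any particular library’s code): the blocking cudaMemcpy keeps the GPU idle until the whole chunk has arrived.

    ```cpp
    #include <cuda_runtime.h>

    // Stand-in for the real number crunching (toy kernel, nothing specific).
    __global__ void process(float* data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= 2.0f;
    }

    void run_naive(const float* host_chunk, float* dev_buf, int n) {
        // Blocking copy: the GPU sits idle until the chunk has fully arrived.
        cudaMemcpy(dev_buf, host_chunk, n * sizeof(float), cudaMemcpyHostToDevice);
        // Only now does the GPU get to do useful work.
        process<<<(n + 255) / 256, 256>>>(dev_buf, n);
        cudaDeviceSynchronize();  // wait for the kernel before reusing the buffer
    }
    ```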

    So GPU programmers came up with a trick to reduce or even hide that latency: double buffering. As the name suggests, the idea is to allocate not one but two buffers of the same size on your GPU; let’s call them buffer_0 and buffer_1. If your algorithm is iterative and you have a bunch of chunks in host memory that you want to run the same GPU code on, then at the first iteration you send a chunk from host memory to buffer_0 and run your GPU kernel on it asynchronously. While it’s running, the CPU gets control back and can immediately prepare the next iteration: pick another chunk and send it, also asynchronously, to buffer_1. Once the previous asynchronous kernel has finished, you launch the same kernel again, this time on buffer_1, again asynchronously, and copy (asynchronously once more) the next chunk from the host into buffer_0. You keep swapping the two buffers like this for the rest of your loop.
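
    A rough sketch of that double-buffered loop, with made-up names and the same toy kernel as above: each chunk’s host->device copy is queued on one CUDA stream while the kernel is still crunching the other buffer on the second stream. (For the async copies to truly overlap, the host chunks would also need to be in pinned memory, e.g. allocated with cudaMallocHost.)

    ```cpp
    #include <cuda_runtime.h>

    __global__ void process(float* data, int n) {  // same toy kernel as above
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= 2.0f;
    }

    void run_double_buffered(const float* host_chunks, int num_chunks, int n) {
        float* dev_buf[2];
        cudaStream_t stream[2];
        for (int b = 0; b < 2; ++b) {
            cudaMalloc(&dev_buf[b], n * sizeof(float));
            cudaStreamCreate(&stream[b]);
        }

        for (int c = 0; c < num_chunks; ++c) {
            int b = c % 2;  // alternate between buffer_0 and buffer_1
            // Queue the copy and the kernel on this buffer's stream; meanwhile
            // the other stream is still busy with the previous chunk.
            cudaMemcpyAsync(dev_buf[b], host_chunks + (size_t)c * n,
                            n * sizeof(float), cudaMemcpyHostToDevice, stream[b]);
            process<<<(n + 255) / 256, 256, 0, stream[b]>>>(dev_buf[b], n);
        }

        for (int b = 0; b < 2; ++b) {
            cudaStreamSynchronize(stream[b]);
            cudaStreamDestroy(stream[b]);
            cudaFree(dev_buf[b]);
        }
    }
    ```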

    Now some GPU programmers don’t just want to compute stuff; they might also want to render stuff on the screen. So what happens when they try to copy from one of those buffers to the screen? It depends: if they copy synchronously, we get the initial latency problem back. If they copy asynchronously, the host->GPU copy and/or the GPU kernel will keep overwriting buffers before they finish rendering on the screen, which causes tearing.

    So those programmers pushed the double buffering idea a bit further: just add one more buffer to hide the latency of sending stuff to the screen as well, and that gives us triple buffering. You can guess how this one works because it’s exactly the same principle.
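
    A hand-wavy sketch of that idea, with a stubbed-out present() standing in for the real hand-off to the display (a real renderer would go through something like OpenGL/Vulkan interop instead): at each step one buffer receives the next chunk, one is being crunched by the kernel, and one is being shown, and the three buffers rotate through those roles.

    ```cpp
    #include <cuda_runtime.h>

    __global__ void process(float* data, int n) {  // same toy kernel as above
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= 2.0f;
    }

    // Hypothetical stand-in for handing a finished buffer to the display.
    void present(const float* /*dev_buf*/, int /*n*/, cudaStream_t /*s*/) {}

    void run_triple_buffered(const float* host_chunks, int num_chunks, int n) {
        float* buf[3];
        cudaStream_t upload, compute, display;
        for (int b = 0; b < 3; ++b) cudaMalloc(&buf[b], n * sizeof(float));
        cudaStreamCreate(&upload);
        cudaStreamCreate(&compute);
        cudaStreamCreate(&display);

        // 3-stage pipeline: at step i we upload chunk i, crunch chunk i-1 and
        // "show" chunk i-2, each on its own stream and its own buffer.
        for (int i = 0; i < num_chunks + 2; ++i) {
            if (i < num_chunks)
                cudaMemcpyAsync(buf[i % 3], host_chunks + (size_t)i * n,
                                n * sizeof(float), cudaMemcpyHostToDevice, upload);
            if (i >= 1 && i - 1 < num_chunks)
                process<<<(n + 255) / 256, 256, 0, compute>>>(buf[(i - 1) % 3], n);
            if (i >= 2)
                present(buf[(i - 2) % 3], n, display);
            // Crude barrier so a buffer is never overwritten while it's still
            // being shown; real code would use CUDA events instead.
            cudaDeviceSynchronize();
        }

        for (int b = 0; b < 3; ++b) cudaFree(buf[b]);
        cudaStreamDestroy(upload);
        cudaStreamDestroy(compute);
        cudaStreamDestroy(display);
    }
    ```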

      • Chewy@discuss.tchncs.de (OP) · 1 year ago

        If the system can’t keep up with an animation, e.g. GNOME’s overview, the fps momentarily halves because of double-buffered vsync. This is perceived as stutter.

        With triple-buffered vsync the fps only drops a little (e.g. 60 fps -> 55 fps), which isn’t as big of a drop, so the stutter isn’t as noticeable (if it’s noticeable at all).
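
        Back-of-the-envelope numbers for that, assuming a 60 Hz display and a frame that takes 18 ms instead of the ~16.7 ms budget (plain host-side arithmetic, nothing GPU-specific):

        ```cpp
        #include <cmath>
        #include <cstdio>

        int main() {
            const double refresh_ms = 1000.0 / 60.0;  // ~16.7 ms between vblanks at 60 Hz
            const double frame_ms   = 18.0;           // a frame that just misses the deadline

            // Double-buffered vsync: the late frame has to wait for the *next* vblank,
            // so it occupies two refresh intervals and fps momentarily drops to 30.
            const double double_buffered_fps =
                1000.0 / (std::ceil(frame_ms / refresh_ms) * refresh_ms);

            // Triple-buffered vsync: the GPU keeps working on the next frame in the
            // third buffer instead of stalling, so on average only the overrun is lost.
            const double triple_buffered_fps = 1000.0 / frame_ms;

            std::printf("double buffered: %.0f fps, triple buffered: ~%.0f fps\n",
                        double_buffered_fps, triple_buffered_fps);  // 30 fps vs ~56 fps
        }
        ```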

        • MonkderZweite@feddit.ch · 11 months ago

          Maybe make the animation a bit simpler…?

          Less animation is usually better UX for something that’s used often, unless it’s there to hide the slowness of something else.

          • jmcs@discuss.tchncs.de · 11 months ago

            If by animations you mean smoothly moving the mouse and windows while badly optimized apps and websites are rendering, yes.

          • Moltz@lemm.ee · 11 months ago

            Lol, why own up to adding animations the system can’t handle when you can blame app and web devs? Gnome users always know where the blame should be laid, and it’s never Gnome.

      • AlmightySnoo 🐢🇮🇱🇺🇦@lemmy.world · 1 year ago

        Biased opinion here, as I haven’t used GNOME since the switch to version 3 and I dislike it a lot: the animations are so slow that they demand a good GPU with fast VRAM to hide it, so the developers need to borrow techniques from game/GPU programming to make GNOME more fluid for users with less beefy cards.

        • Moltz@lemm.ee · 11 months ago

          Not only slow, it drops frames constantly. Doesn’t matter how good your hardware is.

          There’s always the Android route: why fix the animations when you can just put high-framerate screens on all the hardware to hide the jank? Ah, who am I kidding, Gnome wouldn’t know how to properly support high framerates across multiple monitors either. How many years did fractional scaling take?