TL;DW:

  • FSR 3 is frame generation, similar to DLSS 3. It can increase FPS by 2-3x.

  • FSR 3 can run on any GPU, including consoles. They made a point about how it would be dumb to limit it to only the newest generation of cards.

  • Every DX11 & DX12 game can take advantage of this tech via HYPR-RX, which is AMD’s software for boosting frames and decreasing latency.

  • Games will start using it by early fall; the public launch will be by Q1 2024.

It remains to be seen how good or noticeable FSR 3 will be, but if it actually runs well I think we can expect tons of games (especially on console) to make use of it.

  • Vash63@lemmy.world · 1 year ago

    Only DX11 and 12? Shame they don’t support open APIs like Vulkan if true.

    • Ranvier@sopuli.xyz · 1 year ago

      They’ve also stated FSR 3 will continue to be open source, and previous versions have been compatible with Vulkan on the developer end at least. I can’t find, though, whether the new HYPR-RX application that runs it without any developer integration supports Vulkan. Guess we’ll find out when it’s released shortly here.

    • azvasKvklenko@sh.itjust.works · 1 year ago

      It can probably be integrated into anything, like FSR 1 and 2. Valve can just update their Gamescope compositor to use it instead of FSR 1. I wonder, though, what the image quality is going to be like when upscaling/generating frames from such small input resolutions. Previous versions of FSR really only made sense from around 1080p upwards.

  • Ranvier@sopuli.xyz · 1 year ago

    I hope this works out and becomes a viable competitor to DLSS 3, especially with this most recent generation of games getting so demanding spec-wise. I also appreciate that they make it available for any graphics card from any company. Nvidia certainly has an edge in proprietary features that AMD is having trouble matching at the moment, but Nvidia becoming even more dominant is bad news. Lack of competition will only encourage them to stagnate and raise prices even higher. I’ll probably be looking to upgrade my own GPU soon, so I’m very interested in how the just-announced AMD 7800 XT compares against the Nvidia 4070.

  • elderflower@lemmy.world · 1 year ago

    Hallucinated frames like DLSS3. Completely unnecessary, just like the hallucinated pixels of DLSS2/FSR2. Dialling a couple of settings down to medium looks much better.

    • candyman337@sh.itjust.works · edited · 1 year ago

      We are reaching the limits of render technology with our current architectures. You’ll find that most established practices in computer hardware/software/firmware started as a “cheat” or weird innovation, using something in an ass-backwards way. Reducing the amount of data a GPU needs to render is a good way to get more out of both old and new hardware. It’s not perfected yet, but the future of these features is very promising.

      • PenguinTD · 1 year ago

        Good thing the rendering engineers are willing to try different approaches instead of being stuck on this “real pixel” shit that some youtuber started. Even freaking Pixar, the granddaddy of CG tech, is doing ML global illumination and temporal denoising. Some of our current-gen realtime graphics literally took hours to render 10 years ago; hardware isn’t improving that fast, it’s the new algorithms and render methods that make it possible.

      • meat_popsicle@sh.itjust.works · edited · 1 year ago

        An upscaler cannot provide higher fidelity than native, provided all settings (other than resolution) are constant.

        • NewNewAccount@lemmy.world · 1 year ago

          Sounds impossible, but it’s not. Check out some of Digital Foundry’s coverage of DLSS 2; more detail was pulled out of the lower base resolution than even native.

        • xep@kbin.social · 1 year ago

          The upscaler is trained on higher-resolution data, so it can more accurately depict subpixel and temporal information that is lost at native resolution. DLSS can produce more detail than native in those cases.
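A minimal sketch of the temporal idea (my own toy 1-D illustration, not DLSS itself; the function names are made up): if each low-resolution frame is sampled with a different subpixel jitter, interleaving the samples across frames recovers detail finer than any single frame holds.

```python
def temporal_supersample(sample_fn, width, jitters):
    """Toy 1-D temporal super-resolution (illustrative only).

    sample_fn: the 'scene', a function of a continuous coordinate.
    width: number of samples (pixels) taken per frame.
    jitters: subpixel offset applied to each successive frame.
    Returns the interleaved samples, a grid len(jitters) times denser
    than any single frame.
    """
    # Render one low-res frame per jitter offset.
    frames = [[sample_fn(x + j) for x in range(width)] for j in jitters]
    # Interleave the jittered samples into one denser signal.
    recovered = []
    for x in range(width):
        for frame in frames:
            recovered.append(frame[x])
    return recovered
```

With jitters of 0.0 and 0.5, the two frames together sample the scene at twice per-frame density, which is how a temporal upscaler can resolve subpixel detail that any single native frame misses.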

    • lustyargonian@lemm.ee · 1 year ago

      I don’t think the term applies here. Hallucination, when it comes to AI models, is when they make up data with no basis. This, on the other hand, is interpolation: it compares two frames and predicts the intermediate frame using motion vectors. And FSR 3 isn’t even using machine learning; it’s a bespoke algorithm that they have written.

      Approximation would be a fitting term here, just like many things in rendering technology are.
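A rough sketch of motion-compensated interpolation (a toy 1-D version written for illustration; FSR 3's actual algorithm is far more sophisticated): each pixel of the generated frame samples both neighboring frames along its motion vector and blends the results.

```python
def interpolate_frame(prev_frame, next_frame, motion, t=0.5):
    """Toy 1-D motion-compensated frame interpolation.

    prev_frame, next_frame: lists of pixel intensities.
    motion: per-pixel displacement (in pixels) between the frames.
    t: time of the generated frame between prev (0) and next (1).
    """
    n = len(prev_frame)
    out = []
    for x in range(n):
        # Follow the motion vector backward into the previous frame
        # and forward into the next one, clamping at the borders.
        src_prev = min(max(round(x - t * motion[x]), 0), n - 1)
        src_next = min(max(round(x + (1 - t) * motion[x]), 0), n - 1)
        # Blend the two motion-compensated samples.
        out.append((1 - t) * prev_frame[src_prev] + t * next_frame[src_next])
    return out
```

An object moving from index 1 in one frame to index 3 in the next lands at index 2 in the generated midpoint frame: the new frame is assembled from real data in the two surrounding frames rather than invented from nothing, which is why “interpolation” fits better than “hallucination”.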

      • elderflower@lemmy.world · 1 year ago

        AI models are universal approximators f such that y = f(x, w), with optimizable weights w chosen to minimize some loss L(y). You can come up with a hand-tuned approximator yourself that matches or beats an AI model. That doesn’t change the fact that any approximator attempts to guess (i.e. “hallucinate”) the output y based on the input x.
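As a toy illustration of this framing (made-up example, not from the comment above): a one-weight approximator f(x, w) = w·x whose weight is optimized by gradient descent to minimize a squared-error loss.

```python
def fit_weight(xs, ys, lr=0.01, steps=1000):
    """Fit f(x, w) = w * x by gradient descent on the squared-error
    loss L = sum((w * x - y)^2). A hand-tuned w achieving the same
    loss would be an equally valid approximator."""
    w = 0.0  # initial weight
    for _ in range(steps):
        # dL/dw = sum over the data of 2 * (w*x - y) * x
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys))
        w -= lr * grad  # step against the gradient
    return w

# Data generated by y = 3x; the optimizer recovers w close to 3.
w = fit_weight([1, 2, 3], [3, 6, 9])
```

Whether the output comes from learned weights or a hand-tuned formula, both are guessing the same unknown y from x, which is the point being made here.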