• ocassionallyaduck@lemmy.world · 11 hours ago

    Frame reprojection lacks motion data. It's right there in the name: it reprojects the last frame. Frame generation, by contrast, uses the interval between real frames, feeds in motion vector data, and estimates movement.

    If I am trying to follow a ball going across the screen without moving my mouse, reprojection is flat-out worse, because it is reprojecting the last frame, where nothing moved. You get Frame 1, Frame 1RP, then Frame 2, and frames 1 and 1RP have the ball in the exact same place. If I move my viewpoint, the perspective will feel correct: the viewport edges blur and the reprojection maps to the new perspective, which feels better for head tracking in VR. But in terms of information delivery there is no new data, not even a guess. It's the same frame shifted to a different point in space, until the next real frame comes in.
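    Here's a toy sketch of what I mean (hypothetical pinhole-camera code, not any vendor's actual reprojection API): the warp only has the camera delta to work with, so with a static camera the reprojected frame is pixel-identical.

    ```python
    import numpy as np

    def project(world_pos, cam_pos, focal=1.0):
        """Pinhole projection of a world point for a camera at cam_pos."""
        rel = world_pos - cam_pos
        return focal * rel[:2] / rel[2]   # screen-space (x, y)

    ball_f1 = np.array([0.0, 0.0, 10.0])  # ball position at real frame 1
    ball_f2 = np.array([1.0, 0.0, 10.0])  # ball has moved by real frame 2
    cam     = np.array([0.0, 0.0, 0.0])   # camera is not moving

    frame1   = project(ball_f1, cam)
    frame1rp = project(ball_f1, cam)  # 1RP re-uses frame-1 content: same ball
    frame2   = project(ball_f2, cam)

    print(frame1, frame1rp, frame2)   # 1 and 1RP identical; ball only moves at 2
    ```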

    With frame generation, if I am watching this ball again, it now looks more like Frame 1 (real), Frame 1G (estimate), Frame 2 (real). Frames 1 and 1G have different data, because 1G is built on the motion vectors between frames. It's not 100% accurate, but it's an educated guess at where the ball is going between frame 1 and frame 2. If I move my viewpoint, it does not feel as responsive as reprojection, but the gained in-between frame helps with motion tracking in action.
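    Same toy setup with interpolation instead (again just a sketch; real frame generation works per-pixel with engine-supplied motion vectors):

    ```python
    import numpy as np

    ball_f1 = np.array([0.0, 0.0])        # screen position at real frame 1
    ball_f2 = np.array([1.0, 0.0])        # screen position at real frame 2

    motion_vector = ball_f2 - ball_f1     # per-pixel vectors in a real engine
    ball_f1g = ball_f1 + 0.5 * motion_vector  # generated midpoint frame 1G

    print(ball_f1, ball_f1g, ball_f2)     # three distinct ball positions now
    ```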

    The real answer is to use frame generation with a low-latency configuration, and also enable reprojection in the game engine if possible. Then you have the best of both worlds. For VR, the headset is the viewport, so it's handled at the driver level. But for flat games, the viewport is a detached virtual camera, so the gamedev has to expose it and set up reprojection, or Nvidia and AMD need to build some kind of DLSS/FSR-like hook for devs to utilize.
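    Roughly what a combined pipeline could look like (pure sketch: none of these hooks exist in any shipping engine or driver today, and every function name here is made up):

    ```python
    import time

    def render_real_frame(pose):        # stand-in for a full render pass
        return {"content_pose": pose}

    def generate_midframe(prev, cur):   # stand-in for DLSS/FSR-style interpolation
        mid = (prev["content_pose"] + cur["content_pose"]) / 2
        return {"content_pose": mid}

    def reproject(frame, pose):         # stand-in for an ASW/timewarp-style warp
        return dict(frame, camera_pose=pose)

    def latest_input_pose():            # freshest camera/input sample
        return time.monotonic()

    prev = render_real_frame(latest_input_pose())
    for _ in range(3):
        cur = render_real_frame(latest_input_pose())   # real frame N+1
        mid = generate_midframe(prev, cur)             # object motion between reals
        for f in (mid, cur):
            # Late reprojection: re-warp with the newest input right before
            # presenting, the way VR runtimes do it at the driver level.
            print(reproject(f, latest_input_pose()))
        prev = cur
    ```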

    But if you could do both at once, that would be very cool. You would get the most responsive feel in terms of lag between input and action on screen, while also getting motion updates faster than a full render pass. So yes, Intel's solution is a step in that direction. But ASW is not in itself a solution, especially for high-motion scenes with lots of graphics. There is a reason the demo engine in the LTT video was extremely basic: if you loaded it up with particle effects and the kind of heavy rendering you see in high-end titles, the smearing from reprojection would look awful without rules and bounding on it.

    • MentalEdge@sopuli.xyz · edited · 4 minutes ago

      The reprojected frame with the ball in the same spot is still more up to date than a generated frame using interpolation.

      With reprojection, every other frame is showing where the ball actually is.

      It essentially displays the game world at the framerate it is actually being generated at, with as little latency as possible.

      I vastly prefer this. Together with the reduced perceived input latency, this makes motion tracking FAR easier than with frame generation.

      With current frame generation, every frame is showing where the ball was two or three frames ago. You never see where it is right now. Because of this, in fast-paced action, hand-eye coordination is slower, more likely to overshoot, etc.
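      Back-of-the-envelope numbers (my assumptions: 30 real fps doubled to 60 displayed fps, with interpolation holding back one real frame):

      ```python
      real_dt = 1000 / 30    # ms between real frames at 30 fps

      # Interpolation: real frame N is held back until N+1 exists, so every
      # displayed frame (real or generated) depicts the world ~one real frame ago.
      interp_age = real_dt                 # ~33 ms = two displayed frames at 60 fps

      # Reprojection: real frames display immediately; only the in-between warp
      # re-uses old scene content (with a fresh camera pose).
      reproj_age_real = 0.0                # real frame: current world state
      reproj_age_warp = real_dt / 2        # warped frame: ~17 ms old content

      print(f"{interp_age:.1f} {reproj_age_real:.1f} {reproj_age_warp:.1f}")
      ```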

      And further-developed reprojection absolutely could account for such things.