• kata1yst@sh.itjust.works · 5 points · 4 days ago

    Meh. PhysX emulation on CPU has been outstripping the hardware implementations for a while now, as far as I know.

    Nvidia dropping a portfolio item to open source appears to only happen once they’ve milked it to death first.

      • kata1yst@sh.itjust.works · 3 points · 4 days ago

        I mean, does it work worse? UE4, Havok, and Unigine all use CPU PhysX. And every other engine I know of uses a custom particle-physics implementation and seems far better at it than GPU PhysX ever was.

        On GPU, I remember PhysX being super buggy, since the GPU calculations ran at very low precision, and that was only if you had an Nvidia card. It made AMD cards borderline unplayable in many games that did extensive particle physics for no reason other than to punish AMD in benchmarks.
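        To illustrate the precision point (a toy sketch, nothing to do with actual PhysX code): integrate a simple free-fall particle with explicit Euler at single precision, the only precision the GPU path used, and at double precision, and the 32-bit result visibly drifts.

```python
import numpy as np

# Toy sketch, not actual PhysX code: explicit-Euler free fall integrated
# at 32-bit precision vs 64-bit, to show rounding error accumulating.
def free_fall(dtype, steps=100_000, dt=1e-3):
    dt = dtype(dt)
    g = dtype(-9.81)   # gravity, m/s^2
    vel = dtype(0.0)
    pos = dtype(0.0)
    for _ in range(steps):
        vel = dtype(vel + g * dt)
        pos = dtype(pos + vel * dt)
    return float(pos)

double = free_fall(np.float64)
single = free_fall(np.float32)
# Relative drift of the fp32 result — small per step, but it compounds.
print(abs(single - double) / abs(double))
```

        Per-step rounding at fp32 is tiny, but over hundreds of thousands of steps it adds up, which is one plausible mechanism behind the jittery GPU particle behaviour described above.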

        • Flipper@feddit.org · 2 points · 4 days ago

          For AMD users it was executed on a single CPU core. So the problems you're talking about with AMD cards are exactly what I mean.
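          For a feel of what that single-core, scalar fallback leaves on the table, here's a hypothetical particle update (not PhysX source): the same step done one particle at a time versus as one vectorized batch, which is what SIMD/multithreaded CPU physics does instead.

```python
import numpy as np

# Hypothetical particle update, not PhysX source: one-particle-at-a-time
# (the shape of the old scalar, single-core fallback) vs a vectorized
# batch update over the whole particle array.
rng = np.random.default_rng(0)
n, dt = 10_000, 1e-3
pos = np.zeros(n)
vel = rng.standard_normal(n)

pos_scalar = pos.copy()
for i in range(n):               # scalar path: one particle per iteration
    pos_scalar[i] += vel[i] * dt

pos_batch = pos + vel * dt       # batched path: whole array at once

print(np.allclose(pos_scalar, pos_batch))
```

          Both paths compute the same answer; the difference is throughput, which is exactly why pinning the fallback to one scalar core made it so slow.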

          • kata1yst@sh.itjust.works · 2 points · 4 days ago

            Not trying to be rude, but that's a question of how the engine uses the CPU vs. the GPU implementation, not an apples-to-apples comparison.

            Comparing modern games with CPU particle physics to the heyday of GPU PhysX, there is no contest. CPU physics (and PhysX) is more accurate, less buggy, and generally has little performance impact.