• foggy@lemmy.world
      1 year ago

      AI trains on available content.

      The new content created since AI took off contains a lot of AI-created content.

      It’s learning from its own incomplete grasp of reality, which further degrades its understanding of reality.
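
      That feedback loop can be sketched in a toy way: fit a model to data, sample new "data" from the model, refit, repeat. Everything here is made up for illustration (a one-dimensional Gaussian standing in for a language model), but it shows how estimation error compounds across generations:

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # "Real" training data: samples from the distribution a model tries to learn.
      data = rng.normal(loc=0.0, scale=1.0, size=200)

      stds = []
      for generation in range(30):
          # "Train": fit a trivial model (just a mean and a spread) to the data.
          mu, sigma = data.mean(), data.std()
          stds.append(sigma)
          # The next generation trains on the previous model's own output.
          data = rng.normal(loc=mu, scale=sigma, size=200)

      # The fitted spread drifts away from the true value of 1.0 as each
      # generation inherits the previous generation's estimation error --
      # a toy analogue of models degrading on their own output.
      print(f"spread per generation: {stds[0]:.3f} -> {stds[-1]:.3f}")
      ```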

        • foggy@lemmy.world
          1 year ago

          There is no gpt5, and gpt4 gets constant updates, so it’s a bit of a misnomer at this point in its lifespan.

        • FaceDeer@kbin.social
          1 year ago

          It’s possible to apply a layer of fine-tuning “on top” of the base pretrained model. I’m sure OpenAI has been doing that a lot, and including ever more “don’t run through puddles and splash pedestrians” restrictions that are making it harder and harder for the model to think.
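
          A minimal sketch of that idea: keep the base model's weights frozen and train only a small head on top. All the weights and data below are invented, and plain NumPy stands in for a real LLM stack; it just shows which parameters move during fine-tuning:

          ```python
          import numpy as np

          rng = np.random.default_rng(1)

          # Pretend "base pretrained model": a frozen feature extractor.
          W_base = rng.normal(size=(4, 8))       # frozen weights, never updated
          def base_features(x):
              return np.tanh(x @ W_base)         # frozen forward pass

          # The fine-tuning layer "on top": only this small head is trained.
          W_head = np.zeros(8)

          # Tiny labeled dataset for the new behavior being bolted on.
          X = rng.normal(size=(64, 4))
          y = (X[:, 0] > 0).astype(float)        # toy target

          lr = 0.5
          for _ in range(200):
              h = base_features(X)               # base stays fixed
              p = 1 / (1 + np.exp(-(h @ W_head)))    # sigmoid head
              grad = h.T @ (p - y) / len(y)      # logistic-loss gradient, head only
              W_head -= lr * grad

          p_final = 1 / (1 + np.exp(-(base_features(X) @ W_head)))
          acc = ((p_final > 0.5) == (y > 0.5)).mean()
          print(f"head-only training accuracy: {acc:.2f}")
          ```

          Because only `W_head` is updated, each new restriction layered on this way reshapes the model's outputs without retraining the base, which is why stacking many of them can distort behavior.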

    • magic_lobster_party@kbin.social
      1 year ago

      They don’t want it to say dumb things, so they train it to say “I’m sorry, I cannot do that” in response to various prompts. This has been known to degrade the quality of the model for quite some time, so it’s probably the reason.
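
      A toy sketch of what that training data could look like (every prompt, answer, and the flagging rule here is hypothetical): prompts flagged by some safety filter get a canned refusal as their training target, so the model learns to refuse instead of answering.

      ```python
      # Hypothetical refusal-style fine-tuning data construction.
      REFUSAL = "I'm sorry, I cannot do that."

      raw_pairs = [
          ("How do I bake bread?", "Mix flour, water, yeast, salt; knead; proof; bake."),
          ("How do I pick a lock?", "(detailed answer the provider doesn't want to serve)"),
      ]
      flagged = {"How do I pick a lock?"}   # whatever the safety filter flags

      # Flagged prompts get the refusal as their target; the rest keep real answers.
      finetune_pairs = [
          (prompt, REFUSAL if prompt in flagged else answer)
          for prompt, answer in raw_pairs
      ]
      print(finetune_pairs[1])
      ```

      Training on enough pairs like the second one pushes the model toward refusing, and can bleed into nearby prompts it shouldn't refuse, which is one way quality degrades.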