Four weeks ago, GPT-4 remained the undisputed champion: consistently at the top of every key benchmark, but more importantly the clear winner in terms of “vibes”. Almost everyone investing serious time exploring LLMs agreed that it was the most capable default model for the majority of tasks—and had been for more than a year.

Today the GPT-4 barrier has finally been smashed. We have four new models, all released to the public in the last four weeks, that are benchmarking near or even above GPT-4. And the all-important vibes are good, too!

Those models come from four different vendors.

    • GBU_28@lemm.ee

      There are many free LLMs and platforms to access them. You can download and permanently possess the actual model files and weights.

      There are open-source frameworks that let you run and interact with these models fully locally.
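      For example, here’s a minimal sketch of fully local inference using the Hugging Face transformers library (the model ID is just an illustration; any openly licensed model whose weights you’ve downloaded will do):

      ```python
      # Minimal local-inference sketch (assumes `pip install transformers accelerate torch`).
      # The model ID is illustrative; substitute any open-weights model you have locally.
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_id = "mistralai/Mistral-7B-Instruct-v0.2"
      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

      prompt = "Explain the difference between open weights and open source."
      inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
      outputs = model.generate(**inputs, max_new_tokens=200)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))
      ```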

      • TheAnonymouseJoker@lemmy.ml

        The problem is open models are nowhere close to stuff like GPT-4. It is going to be a problem for us non-elites.

        • Gabu@lemmy.ml

          “The problem is open models are nowhere close to stuff like GPT-4”

          Of course not, you’d need the same class of hardware running 24/7 to get similar results, and ain’t nobody paying for that.

        • slacktoid@lemmy.ml

          Agreed, but it’s still a good tool that’s available. You can use it to summarize large documents. It will probably never be as capable as what elite money buys, but it’s still worth playing with and learning how to use, IMHO.
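          For instance, here’s what that can look like with llama-cpp-python and a local GGUF model (the model path is just a placeholder), splitting a document that won’t fit in the context window and summarizing in two passes:

          ```python
          # Two-pass ("map-reduce") summarization sketch for documents larger than the
          # context window. Assumes `pip install llama-cpp-python`; the model path is a
          # placeholder, not a specific recommendation.
          from llama_cpp import Llama

          llm = Llama(model_path="./models/example-7b-instruct.gguf", n_ctx=4096)

          def summarize(text: str) -> str:
              out = llm(f"Summarize the following text in a few sentences:\n\n{text}\n\nSummary:",
                        max_tokens=256)
              return out["choices"][0]["text"].strip()

          def summarize_large(document: str, chunk_chars: int = 6000) -> str:
              # Summarize each rough chunk, then summarize the combined summaries.
              chunks = [document[i:i + chunk_chars] for i in range(0, len(document), chunk_chars)]
              return summarize("\n".join(summarize(c) for c in chunks))
          ```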

        • GBU_28@lemm.ee

          I’ll acknowledge that right now, to get model conclusions on par with GPT-4, you are going to need a custom pipeline with multiple adversarial models, RAG, and more. But it all could be built by an eager hobbyist with a strong gaming PC.

          To be clear, this approach will not benchmark the same as GPT-4, but it can indeed generate useful content.
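          For the RAG piece, a bare-bones sketch looks something like this (the embedding model and snippets are purely illustrative; the retrieved context just gets prepended to whatever prompt you send to your local model):

          ```python
          # Bare-bones retrieval-augmented generation (RAG) sketch.
          # Assumes `pip install sentence-transformers numpy`; model choice is illustrative.
          import numpy as np
          from sentence_transformers import SentenceTransformer

          embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small, runs fine on CPU

          documents = [
              "Open-weight models can be downloaded and run on consumer hardware.",
              "Retrieval adds relevant passages to the prompt before generation.",
              "Quantized models trade a little accuracy for much lower memory use.",
          ]
          doc_vectors = embedder.encode(documents, normalize_embeddings=True)

          def retrieve(query: str, k: int = 2) -> list[str]:
              # Cosine similarity; vectors are normalized, so a dot product suffices.
              q = embedder.encode([query], normalize_embeddings=True)[0]
              scores = doc_vectors @ q
              return [documents[i] for i in np.argsort(scores)[::-1][:k]]

          query = "Why pair retrieval with a small local model?"
          context = "\n".join(retrieve(query))
          prompt = f"Use the context to answer.\n\nContext:\n{context}\n\nQuestion: {query}"
          # `prompt` then goes to whatever local model you run (llama.cpp, transformers, ...).
          ```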

    • mozz@mbin.grits.dev

      Mistral has a lot of open source models that are quite good. Their largest ones are closed; for what reason I don’t know.

      • TheAnonymouseJoker@lemmy.ml

        Then that also does not matter. It is the simple consequence of “open” advocates not donating to, or being financially involved in, the compute hardware and datasets needed to build LLMs. Money capital happens to be the most important factor in society right now.

    • agent_flounder@lemmy.world

      Yup

      Firstly, none of those models are openly licensed, nor are their weights available. I imagine the resources they need to run would make them impractical for most people, but after a year that has seen enormous leaps forward in the openly licensed model category, it’s sad to see the very best models remain strictly proprietary.