Four weeks ago, GPT-4 remained the undisputed champion: consistently at the top of every key benchmark, but more importantly the clear winner in terms of “vibes”. Almost everyone investing serious time exploring LLMs agreed that it was the most capable default model for the majority of tasks—and had been for more than a year.

Today that barrier has finally been smashed. We have four new models, all released to the public in the last four weeks, that are benchmarking near or even above GPT-4. And the all-important vibes are good, too!

Those models come from four different vendors.

  • TheAnonymouseJoker@lemmy.ml · 8 months ago

    The problem is open models are nowhere close to stuff like GPT-4. It is going to be a problem for us non-elites.

    • Gabu@lemmy.ml · 8 months ago

      The problem is open models are nowhere close to stuff like GPT-4

      Of course not, you’d need the same class of hardware running 24/7 to get similar results, and ain’t nobody paying for that.

    • slacktoid@lemmy.ml · 8 months ago

      Agreed, but it’s still a good tool that’s available. You can use it to summarize large documents. It’ll probably never be as capable as what elite monies can buy, but it’s still worth playing with and learning how to use. Imho.

    • GBU_28@lemm.ee · 8 months ago · edited

      I’ll acknowledge that right now, to get model conclusions on par with GPT-4, you are going to need a custom pipeline with multiple adversarial models, RAG, and more. But it could all be built by an eager hobbyist with a strong gaming PC.

      To be clear, this approach will not benchmark the same as GPT-4, but it can indeed generate useful content.
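The "hobbyist RAG pipeline" idea above can be sketched in a few dozen lines. This is a toy illustration, not anyone's actual setup: the retriever here is a bag-of-words cosine similarity, where a real pipeline would use an embedding model, and the prompt would be fed to a local LLM (e.g. via llama.cpp) rather than printed. All function names (`bow`, `retrieve`, `build_prompt`) and the sample documents are made up for the example.

```python
import math
from collections import Counter

def bow(text: str) -> Counter:
    """Toy bag-of-words vector: token counts over lowercased whitespace splits."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank document chunks by similarity to the query, keep the top k."""
    q = bow(query)
    return sorted(docs, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Stuff the top-ranked chunks into a prompt for a local model."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The GPU needs at least 24 GB of VRAM for the 70B model.",
    "Quantized models trade accuracy for lower memory use.",
    "The office coffee machine is broken again.",
]
print(build_prompt("How much VRAM does the 70B model need?", docs))
```

Swapping the bag-of-words scorer for real embeddings, and the `print` for a call into a quantized local model, gives the basic shape of the pipeline being described: retrieval keeps the prompt small enough for consumer hardware while grounding the answer in your own documents.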