• AustralianSimon@lemmy.world

    I work in the field for a company with 40k staff and over 6 million customers.

We have about 100 dedicated data science professionals, and we run a single LLM for our chatbots versus a few hundred ML models in production.

LLMs are overhyped and not delivering as much as people claim. Most businesses doing LLM work will not exist in 2-5 years because Amazon, Google and Microsoft will offer it all cheaper or for free.

They are great at generating content, but honestly most of that content is crap, because it’s AI regurgitating something it’s been trained on. They are our next-gen spam for the most part.

    • CeeBee@lemmy.world

      LLMs are overhyped and not delivering as much as people claim

      I absolutely agree it’s overhyped, but that doesn’t mean useless. These systems are getting better every day, and the money isn’t going to be in the massive models; it’s going to be in smaller, domain-specific models. MoE models show better results than models 10x their size. It’s still 100% early days.
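
      To illustrate what I mean by MoE: below is a toy NumPy sketch of mixture-of-experts routing, where a gate picks a couple of experts per input so only a fraction of the total parameters ever run. The sizes, top-k choice, and random weights are all made up for the example; real MoE layers live inside transformer blocks with learned routers.

      ```python
      # Toy mixture-of-experts (MoE) routing in NumPy -- a sketch of the
      # idea only, not any real model's implementation. Sizes, the top-k
      # choice, and the random weights are all made up for illustration.
      import numpy as np

      rng = np.random.default_rng(0)
      D, H, N_EXPERTS, TOP_K = 16, 32, 8, 2   # hypothetical dimensions

      # Each "expert" is a tiny two-layer MLP (weights only, for brevity).
      experts = [(rng.normal(size=(D, H)), rng.normal(size=(H, D)))
                 for _ in range(N_EXPERTS)]
      gate_w = rng.normal(size=(D, N_EXPERTS))  # the router; learned in a real model

      def moe_forward(x):
          logits = x @ gate_w                # router scores per expert
          top = np.argsort(logits)[-TOP_K:]  # pick the top-k experts
          w = np.exp(logits[top])
          w /= w.sum()                       # softmax over the chosen experts
          # Only TOP_K experts actually run, so compute scales with TOP_K,
          # not with total parameter count -- the core MoE efficiency win.
          out = np.zeros(D)
          for weight, i in zip(w, top):
              w1, w2 = experts[i]
              out += weight * (np.maximum(x @ w1, 0.0) @ w2)  # ReLU MLP expert
          return out

      print(moe_forward(rng.normal(size=D)).shape)  # -> (16,)
      ```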

      most businesses doing LLM work will not exist in 2-5 years because Amazon, Google and Microsoft will offer it all cheaper or for free.

      I somewhat agree with this, but since the LLM hype train started just over a year ago, smaller open-source fine-tuned models have kept ahead of the big players, who are too big to shift quickly. Google even admitted in an internal memo that the open-source community had accomplished in a few months what they had thought was literally impossible: pruning and quantizing models and fine-tuning them to get results very close to much larger models.
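
      Quantization itself isn’t magic, by the way. Here’s a minimal sketch of the core idea, assuming simple symmetric per-tensor int8 quantization; real pipelines (GPTQ, bitsandbytes, llama.cpp’s quant formats) are far more sophisticated.

      ```python
      # Toy post-training quantization of a weight matrix (NumPy).
      # Assumes simple symmetric per-tensor int8 quantization; real
      # tooling is far more sophisticated than this.
      import numpy as np

      rng = np.random.default_rng(1)
      w = rng.normal(scale=0.02, size=(4096, 4096)).astype(np.float32)  # fp32 weights

      scale = np.abs(w).max() / 127.0            # map the largest weight onto int8 range
      w_q = np.round(w / scale).astype(np.int8)  # 4x smaller than fp32 on disk
      w_deq = w_q.astype(np.float32) * scale     # dequantized at inference time

      print(f"mean abs error: {np.abs(w - w_deq).mean():.2e} "
            f"(mean weight magnitude: {np.abs(w).mean():.2e})")
      ```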

      And there are always more companies springing up around a new tech than will still exist a few years later. That’s been true for decades now.

      They are great at generating content, but honestly most of that content is crap, because it’s AI regurgitating something it’s been trained on.

      Well, this is actually demonstrably false. There are many thorough examples of LLMs generating novel output, and even papers written on the subject. But beyond generating new and novel data, LLMs are useful for more than that: they can discern patterns, perform analysis, summarize data, problem-solve, and so on, all of which have various applications.

      But ultimately, how is “regurgitating something it’s been trained on” any different from how we learn? The reality is that we ourselves can only generate things based on what we’ve learned. The difference is that we learn about basically everything, and we have a constant stream of input from all our senses, as well as ideas and thoughts shared with other people.

      Edit: a great example of how we can’t “generate” something outside of what we’ve learned is that we are 100% incapable of visualizing a four-dimensional object. And I mean visualize in your mind’s eye, the way you can with any other shape or object. You can close your eyes right now and picture a cube or a sphere, but you can’t visualize a hypercube or a hypersphere, even though we can describe them mathematically and even render them in software by projecting them into a 3D virtual environment (the way a photo is a 2D representation of a 3D scene). There’s a projection sketch right after this edit.

      /End-Edit
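
      For the curious, here’s roughly what that projection looks like in code: a toy NumPy sketch that perspective-projects a tesseract’s 16 vertices from 4D to 3D and then to 2D. The viewer distances and everything else here are arbitrary choices for the example.

      ```python
      # Toy sketch: project a tesseract (4D hypercube) down to 2D by
      # repeated perspective division. Distances are arbitrary.
      import itertools
      import numpy as np

      # 16 vertices: every combination of +/-1 across four coordinates.
      vertices = np.array(list(itertools.product([-1.0, 1.0], repeat=4)))

      def project(points, dist=3.0):
          # Drop one dimension, pinhole-camera style: divide by the
          # viewer's distance minus the last coordinate.
          return points[:, :-1] / (dist - points[:, -1])[:, None]

      pts_3d = project(vertices)   # 4D -> 3D
      pts_2d = project(pts_3d)     # 3D -> 2D (what actually hits the screen)

      # Edges join vertices that differ in exactly one coordinate.
      edges = [(i, j) for i in range(16) for j in range(i + 1, 16)
               if np.sum(vertices[i] != vertices[j]) == 1]

      print(f"{len(vertices)} vertices, {len(edges)} edges")  # 16 vertices, 32 edges
      ```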

      It’s not an exaggeration to say artificial neural networks learn in a way analogous to how biological neural networks (aka brains) learn, though there’s obviously a huge difference in the inner workings.

      They are our next-gen spam for the most part.

      Maybe that’s true of last-gen models, but definitely not the current SOTA models, and the models coming in the next few years will only get better. Ten years from now is going to look wild.

        • CeeBee@lemmy.world

          I also worked in the field for a decade up until recently, and I use LLMs for a few things professionally, particularly code generation. They can’t write “good and clean” code, but they do help get the ball rolling on boilerplate and help solve issues that aren’t immediately obvious.

          I actually run a number of models locally as well.
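
          For anyone curious what that looks like in practice, querying a locally hosted model is only a few lines. This sketch assumes an Ollama server on its default port with a model like llama3 already pulled; swap in whatever your own setup uses.

          ```python
          # Minimal sketch of querying a locally hosted LLM over HTTP.
          # Assumes an Ollama server on its default port (localhost:11434)
          # and that a model such as "llama3" has already been pulled.
          import json
          import urllib.request

          def ask_local_model(prompt, model="llama3"):
              payload = json.dumps({
                  "model": model,
                  "prompt": prompt,
                  "stream": False,  # one JSON blob instead of a token stream
              }).encode()
              req = urllib.request.Request(
                  "http://localhost:11434/api/generate",
                  data=payload,
                  headers={"Content-Type": "application/json"},
              )
              with urllib.request.urlopen(req) as resp:
                  return json.loads(resp.read())["response"]

          print(ask_local_model("Write a Python function that parses ISO 8601 dates."))
          ```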

          I get you are excited about the tech

          What a condescending thing to say. It has nothing to do with being excited or not. The broader issue is that people approach the topic from an “it’ll replace programmers/writers/accountants/lawyers” standpoint, and I bet that’s exactly what all the suits at various companies expect.

          Whereas the true usefulness of LLMs is as a supplementary tool that helps people do their existing jobs more efficiently. It’s no different from spell check, autocomplete, or code linting; it’s just more capable than those tools.

          for now it is mostly novel and creating junk.

          This statement proves my point. Everyone thinks LLMs will “do the job” when they’re just a tool to HELP with doing the job.

            • CeeBee@lemmy.world

              Said by someone who’s never written a line of code.

              Is autocorrect always right? No, but we all still use it.

              And I never said “poorly generated”; I deliberately wrote “good and clean”. That was in the context of writing larger segments of code on its own, and I clarified right after that it’s good for things like boilerplate code. So no, I never said “poorly generated boilerplate”; you’re putting words in my mouth.

              Workable boilerplate can put you well ahead of where you’d be writing it all yourself. The beauty of boilerplate is that there aren’t many meaningfully different ways to write it. Sure, there are fancier ways, but anything that isn’t easy to read is generally frowned upon. Fortunately, LLMs are genuinely good at the boilerplate stuff. See the sketch below for the kind of thing I mean.
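
              To be concrete, this is the sort of scaffold I’m talking about: a hypothetical example written for illustration, not output from any particular model.

              ```python
              # Hypothetical example of LLM-friendly boilerplate: a CLI entry
              # point with argument parsing and logging setup. Illustrative only.
              import argparse
              import logging

              def build_parser():
                  parser = argparse.ArgumentParser(description="Process an input file.")
                  parser.add_argument("input", help="path to the input file")
                  parser.add_argument("-o", "--output", default="out.txt",
                                      help="where to write results (default: %(default)s)")
                  parser.add_argument("-v", "--verbose", action="store_true",
                                      help="enable debug logging")
                  return parser

              def main():
                  args = build_parser().parse_args()
                  logging.basicConfig(
                      level=logging.DEBUG if args.verbose else logging.INFO,
                      format="%(asctime)s %(levelname)s %(message)s",
                  )
                  logging.info("reading %s, writing %s", args.input, args.output)

              if __name__ == "__main__":
                  main()
              ```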

              Just about every programmer who’s tried GitHub Copilot agrees that it’s not taking over programming jobs anytime soon, but it does a fine job as a coding assistant tool.

              I know of at least three separate coding/tech-related podcasts with multiple hosts that have come to the same conclusion in the past 6 months or so.

              If you’re interested, the ones I’m thinking of are Coder Radio, Linux After Dark, Linux Downtime, and 2.5 Admins.

              Your reply also demonstrates the ridiculous mindset people have about this stuff: the mentality that if it’s not literally a self-aware AI, then it’s spam and worthless. Yeah, it does a fairly basic and mundane thing in the real world, but that mundane thing has measurable utility that makes certain workloads easier or more efficient.

              Sorry it didn’t blow your mind.