• sudneo@lemm.ee · 16 hours ago

    Even if they plateaued in place where they are right now it would lead to major shakeups in humanity’s current workflow

    Like which one? Because it’s been two years now since ChatGPT came out, and we already have quite a lot of (good?) models. Which shakeup do you think is happening or going to happen?

    • locuester@lemmy.zip · 15 hours ago

      Computer programming has radically changed. It’s a huge help having LLM autocomplete and chat built into IDEs like Cursor and Windsurf.

      I’ve been a developer for 35 years. This is shaking it up as much as the internet did.

      • Nalivai@lemmy.world · edited · 14 hours ago

        I quit my previous job in part because I couldn’t deal with the influx of terrible, unreliable, dangerous, bloated, nonsensical, not even working code that was suddenly pushed into one of the projects I was working on. That project is now completely dead, they froze it on some arbitrary version.
        When a junior dev makes a mistake, you can explain it to them and they will not make it again. When they use an LLM to make a mistake, there is nothing to explain to anyone.
        I’d compare this shakeup more to an earthquake than to anything positive you can associate with shaking.

        • InnerScientist@lemmy.world · 13 hours ago

          And so, the problem wasn’t the ai/llm, it was the person who said “looks good” without even looking at the generated code, and then the person who read that pull request and said, again without reading the code, “lgtm”.

          If you have good policies, then it doesn’t matter how many bad practices are used; it still won’t get merged.

          The only overhead is that you have to read all the requests, but if it’s an internal project, then telling everyone to read and understand their code shouldn’t be an issue.

        • locuester@lemmy.zip · 9 hours ago

          This is a problem with your team/project. It’s not a problem with the technology.

      • sudneo@lemm.ee · 15 hours ago

        I hardly see that it has changed, to be honest. I work in the field too, and I can imagine LLMs being good at producing decent boilerplate straight out of documentation, but nothing more complex than that.

        I often use LLMs to work on my personal projects, and - for example - Claude or ChatGPT 4o often spit out programs that don’t compile, use nonexistent functions, are bloated, etc. Possibly for languages with more training data (like Python) they do better, but I can’t see it as a “radical change”; it’s more like a well-configured snippet plugin and autocomplete feature.

        LLMs can’t count, and they can’t analyze novel problems (by definition) or provide innovative solutions… why would they radically change programming?

        • locuester@lemmy.zip · 9 hours ago

          You’re missing it. Use Cursor or Windsurf. The autocomplete will help in so many tedious situations. It’s game-changing.

        • areyouevenreal@lemm.ee · 14 hours ago

          ChatGPT 4o isn’t even the most advanced model, yet I have seen it do things you say it can’t. Maybe work on your prompting.

          • sudneo@lemm.ee · 13 hours ago

            That is my experience: it’s generally quite decent for small and simple stuff (as I said, distillation of documentation). I use it for Rust, where I am sure the training material was much smaller than for other languages. It’s not a matter of prompting, though; it’s not my prompt that makes it hallucinate functions that don’t exist in libraries, or makes it write code that doesn’t compile. It’s a feature of the technology itself.

            GPTs are statistical text generators after all, they don’t “understand” the problem.

            • agamemnonymous@sh.itjust.works · 35 minutes ago

              It’s also pretty young; human toddlers hallucinate and make things up. Adults do too. Even experts are known to fall prey to bias and misconception.

              I don’t think we know nearly enough about the actual architecture of human intelligence to start asserting an understanding of “understanding”. I think it’s a bit foolish to claim with certainty that LLMs in a MoE framework with self-review fundamentally can’t get there. Unless you can show me, materially, how human “understanding” functions, we’re just speculating on an immature technology.

      • areyouevenreal@lemm.ee · 14 hours ago

        Exactly this. Things have already changed and are changing as more and more people learn how and where to use these technologies. I have seen even teachers with a limited grasp of technology in general use this stuff.

      • sudneo@lemm.ee · 13 hours ago

        Oh boy… what can possibly go wrong with documents where small minutiae like wording can make a huge difference?

        • figjam@midwest.social · 12 hours ago

          Creating legal documents? No. Reviewing legal documents for errors and inaccuracies? Totally.

          • sudneo@lemm.ee · 12 hours ago

            I really can’t see this being done by any sane person. Why would you have a text generator reviewing stuff (beyond grammar)? Do you have any reference to companies actually doing this, perhaps?

            • figjam@midwest.social · 10 hours ago

              It’s complex pattern matching and looking up existing case law online. This work has been outsourced to contracting companies for at least 7 years that I’m aware of. If it is something that can be documented in a runbook for non-professionals to do for twenty cents on the dollar, then there is no reason it can’t be done by a script for 0.2 cents.