As a brand-new user of ChatGPT, I have never been so incredibly impressed and so rage-inducingly frustrated at exactly the same time with any new tech I’ve ever tried.

I was using it to help create some simple JavaScript functions and debug some code. It could come up with working functions almost immediately, taking a really interesting approach that I wouldn’t have thought of. “Boom,” I thought, “this is great! Let’s keep going!” Then, immediately afterwards, it would produce absolute shit that couldn’t and wouldn’t work at all. On multiple occasions it couldn’t remember the very code it had just output to me, and when asked to make a few minor changes it constantly spouted brand new, very different functions, usually omitting half the functionality it had before. But when I typed the code directly into a message myself, it did much better every time.

It seems that with every question like that I had to start from scratch, or else it would work from clearly wrong (not even close, usually) newly generated code. For example, if I asked it to print exactly the same function it had printed a moment ago, it would excitedly proclaim, “Of course! Here’s the exact same function!” and then print a completely different one.

I spent so much time carefully wording my question to get it to correctly help me debug something that I ended up finding the bug myself, just because I was being so careful in examining my code so I could ask it a question that would give me a relevant answer. So…I guess that’s a win? Lol. Then, just for fun, I told ChatGPT that I had found and corrected the bug, and it took responsibility for the fix.

And yet, when it does get it right, it’s really quite impressive.

  • @[email protected]
    link
    fedilink
    229 months ago

    You should read a bit more on how LLMs work; it really helps to know what the limitations of the tech are. But yeah, it’s good when it’s good, though a lot of the time it is inconsistent. It is also confident but sometimes confidently wrong, something people have taken to calling “hallucinations”. Overall it is a great tool if you can easily check its output and are just using it to speed up your own code writing, but it’s pretty bad at actually generating fully complete code.

    • soft_frog · 7 points · 9 months ago

      One thing I’ve found is you have to be careful of the context getting polluted with wrong output. If you have one thing wrong, the probability of it using that wrong info is much higher than baseline wrongness.

      In practice that means if it starts spitting out bad code, try a new conversation to refresh things. I find that faster than debugging, because it will often return to a buggy state later.
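The “polluted context” point has a concrete mechanical basis: with chat-style APIs, the model sees only the message list you resend on each turn, so earlier buggy output keeps being fed back in as context. A minimal sketch (the `build_context` helper and the message shapes are illustrative, loosely modeled on common chat APIs, not any specific library):

```python
def build_context(history, new_prompt):
    """Assemble the message list a chat API call would receive."""
    return list(history) + [{"role": "user", "content": new_prompt}]

# A conversation where the model has already produced buggy code...
polluted = [
    {"role": "user", "content": "Write a debounce function"},
    {"role": "assistant", "content": "function debounce() { /* buggy */ }"},
]

# ...resends that buggy code as context on every later turn:
print(len(build_context(polluted, "Now fix the bug")))        # 3 messages, bug included

# A fresh conversation carries none of it:
print(len(build_context([], "Write a debounce function")))    # 1 message
```

Starting a new conversation is literally just starting from an empty history, which is why it so reliably clears the bad state.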

    • ZambonimanOP · 1 point · 9 months ago

      Yes. Even when I know what the limits are, and why, the thing lulls you into responding as if it were a conscious agent. The downside of the way it produces speech.

  • CynAq · 14 points · 9 months ago

    LLMs are “generate something that sounds like it would answer the prompt” machines. Nothing more and nothing less.

    Through that lens, they are a lot less impressive, a lot less frustrating and also a lot more fun.
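That “sounds like it would answer the prompt” framing can be made concrete with a toy next-word model. The sketch below is a deliberately tiny bigram sampler, nothing like a real transformer, but it shows how a purely statistical generator can fluently produce wrong statements: having seen both “is lima” and “is paris” in training, it can happily emit either word after any “… is”.

```python
import random

# Tiny "training corpus": two true sentences.
training = "the capital of peru is lima . the capital of france is paris .".split()

# Record which words were seen following which.
follows = {}
for a, b in zip(training, training[1:]):
    follows.setdefault(a, []).append(b)

def generate(start, n=6, seed=0):
    """Pick each next word at random from words seen following the current one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

# "is" was followed by both "lima" and "paris", so the model can fluently
# assert either after "peru is" -- it optimizes plausibility, not truth.
print(follows["is"])          # ['lima', 'paris']
print(generate("the", seed=1))
```

There is no fact-checking step anywhere in that loop, which is roughly why “confidently wrong” is the failure mode rather than “uncertain”.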

        • Saganastic · 4 points · edited · 9 months ago

          Humans also generate something that sounds like it would answer the prompt. If I ask you “What country is Machu Picchu in?”, you’ll ponder for a moment, and give me what you think the answer to the prompt is. You might answer Peru, or you might answer with something else that seems reasonable to you, like Argentina.

          Humans answer questions incorrectly all the time. And they also try to come up with a reasonable response to prompts when questioned.

          • CynAq · -1 points · 9 months ago

            Just because humans can do something doesn’t mean humans only do that thing and nothing else.

            Humans have many models of the world running in parallel in different modes, enabling us to make sense of things beyond just processing language and coming up with plausible-sounding answers within the rules of a given language.

            Our understanding of concepts is separate from how we process language. This is demonstrated by the fact that there are perfectly intelligent people who can’t communicate using spoken or written language (including sign language) but can do so using other methods, which shows that language processing isn’t essential to our intelligence.

            The way we learn information and integrate it into our neural networks is vastly different from how we train artificial models using machine learning. Even if we just take language processing: we definitely don’t learn by reading the entirety of written human language many times over, regardless of what language it’s written in, until we understand its underlying mechanics well enough to form plausible structures of word-chunk strings without necessarily understanding the concepts behind the word-chunks.

            • Saganastic · 1 point · 9 months ago

              I agree, there’s more going on in a human brain. But fundamentally both humans and LLMs use neural networks. The design of the neural network in an LLM is much simpler than the neural network in a human.

              But they both “think” to come up with an answer. They both cross-reference learned information. They both are able to come up with an answer that is statistically likely to be correct based on their learned information.

              There’s a ton of potential to take the neural networks in LLMs beyond just language. To have them conceptualize abstract ideas the way a human would. To add specialized subsections to the model for math and logic. I think we’re going to see a ton of development in this area.

              And I think you’re right, they’re not exactly the same as humans. But fundamentally there is a lot of similarity. At the end of the day, they are modeled after human brains.
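For readers who haven’t seen one, the basic building block being discussed here, the artificial neuron, is small enough to sketch in a few lines. This is a classic single perceptron with the standard learning rule; real LLMs stack billions of far richer units, but the “weighted inputs plus learned adjustments” idea is the same.

```python
def step(x):
    """Fire (1) if the weighted input clears the threshold, else stay silent (0)."""
    return 1 if x > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron learning rule: nudge weights toward each error."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            err = target - step(w[0] * x1 + w[1] * x2 + b)
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Teach the neuron logical AND from its four input/output pairs.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in AND])  # [0, 0, 0, 1]
```

The neuron never stores the AND table; it only ends up with weights that happen to reproduce it, which is a small-scale version of the “learned statistics, not stored facts” point made above.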

  • @[email protected]
    link
    fedilink
    99 months ago

    It usually isn’t much good at writing new code from scratch. You have to be so specific about what you want that by the time you’ve fully described the code you need, you could have written it yourself.

    What it’s really good at is refactoring or finding bugs in existing code. I will frequently paste in some ugly function that I’ve written and say “can you make this more readable?” and 100% of the time it produces clean, readable code that’s nicer than what I gave it.

    • loaf · 3 points · 9 months ago

      I do this as well. I’ll ask it to check for potential issues, and say, “can you make this more concise?” I’ve actually learned a little by how it will shorten my code.

  • loaf · 5 points · 9 months ago

    Had similar experiences with Python. I started requesting simple functions I could create on my own, and it worked fine. Compounding them even worked to a degree.

    However, it eventually just… failed. Horribly.

    What I’ve learned that works most of the time is copying the entirety of the code (yuck), and telling it to tweak it a bit. That seems to work more often than not.

    • ZambonimanOP · 3 points · 9 months ago

      Yup, exactly what I experienced too.

  • @[email protected]
    link
    fedilink
    19 months ago

    In my experience writing functions is easy, and using LLMs for it is a waste of time. I would spend more time adapting the output to my code and making sure it works than writing it myself. What I could really use help with is figuring out how to use some of the more advanced features of certain tools/libraries, and so far the tools I’ve tried fail at this completely.

  • Rikudou_Sage · 1 point · 9 months ago

    (Maybe) interesting fact: You can trigger ChatGPT here in the comments on Lemmy! Example in a comment under this one.