In my case, there are 95 packages that depend on zlib, so removing it is absolutely the last thing you want to do. Fortunately though, GPT also suggested refreshing the gpg keys, which did solve the update problem I was having.
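
In case anyone wants to sanity-check a suggestion like that themselves, something like the following works on an Arch-style system (the commands and package names differ on other distros):

    # See what actually depends on zlib before even considering removal
    pactree -r zlib      # reverse dependency tree (pactree ships with pacman-contrib)
    pacman -Qi zlib      # the "Required By" field lists direct dependents

    # The fix that actually helped here: refresh the package signing keys, then update
    sudo pacman-key --refresh-keys
    sudo pacman -Syu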

You gotta be careful with that psycho!

  • moreeni@lemm.ee

    Not copy-pasting random commands you’re not 100% sure about is basic terminal literacy.

    • Hamartiogonic@sopuli.xyzOP

      Online forums can give bad advice, but this is just next level bad. GPT truly has no remorse.

      • infeeeee@lemm.ee

        It’s a language model; I still don’t understand why people expect it to always give correct answers. You asked for some code and it gave you some code. I don’t see what the problem is: it worked as it should, and it’s astonishing that current technology can do this.

        I also don’t like the term “Artificial Intelligence”; we should call these things LLMs, or ML for Machine Learning.

          • ReakDuck@lemmy.ml

            I think the term should change from A.I. to machines or something.

            It doesn’t matter if it’s a simple bot regulating something basic or an LLM; for some dumb reason they are both called A.I.

            Real Artificial Intelligence doesn’t exist yet.

            • Daxtron2@startrek.website

              Because they are both AI. Artificial intelligence and artificial general intelligence are not the same thing; AI is an entire field of computer science dating back to the very beginning of the discipline.

              • Hamartiogonic@sopuli.xyzOP

                People expect the current models to be sentient, conscious etc. but we’re still very far from that. All of our ML creations are still very narrow in their scope.

        • Engywuck@lemm.ee

          It gives a lot of plainly wrong answers, including in fields where one would expect it to excel (basic physics, for instance).

          • Sonori@beehaw.org

            Why would a program that outputs pseudorandom strings based on how often they appeared after the preceding string excel at basic physics, even if it did have most of the text on the internet fed into it to determine how often string B follows string A? It’s a miracle of programming and self-reinforcement that it can form a sentence at all.

          • Deceptichum@kbin.social

            It also gives a lot of right answers.

            The point is to not blindly believe either, and to use it as a tool to further your research.

            • Hamartiogonic@sopuli.xyzOP

              That works for online forums and real-life interactions too. Just because someone said or wrote something doesn’t automatically make it reliable.

              • Trainguyrom@reddthat.com

                So you’re saying not to blindly trust that everything written on a forum is true. Which means I shouldn’t trust that anything on Lemmy is true. Which means your comment on Lemmy saying not to trust everything on forums must be false. Therefore everything, including your comment, is true, which means everything is false, which means everything must be true…

                It’s all become quite clear! I…need a nap

        • hitmyspot@aussie.zone

          Meh, we call people intelligent and they confidently give wrong answers too. It’s not AI in the traditional sense, but AI has now come to mean LLM for non-tech-literate users. Language evolves. We don’t need to fight it.

          • infeeeee@lemm.ee

            What will you call “AI in the traditional sense” when we finally get there? You can’t call it AI, because AI means LLM now.

            I’m not against the natural development of language; I simply don’t like mislabeling things.

            • hitmyspot@aussie.zone

              AI is already categorized. It will likely be called true AI, or complex AI, or something similar. Like any technology, the newer version will eventually replace the old one and it will just be called AI again.

            • Hamartiogonic@sopuli.xyzOP

              What people mean by AI has changed. When Optical Character Recognition was new, that was considered AI. Nowadays it’s so common that it’s nothing special any more. When people talk about AI in the 2020s, they usually exclude OCR from the definition.

            • averyminya@beehaw.org

              It will likely either get some different name, like NPA (Neural Processing Agent, which I just made up), or it will get the AI label again and the colloquial use of “AI” for LLMs will fade. This happens with all sorts of terms, and a lot depends on context: CPU used to be a colloquialism for the whole computer, but now that CPUs themselves are discussed more often, that usage has faded outside of random SEO sales sites.

              It’s not really mislabeling; it’s colloquial language. Society has ebbs and flows, and the only thing ever wrong with these, IMO, is a failure of communication due to poor contextual understanding (and delivery). If something is explained in a proper sentence, it usually doesn’t matter much whether you use academic language or colloquial terms. That is to say, you are absolutely right that the distinction is important, but we can code-switch between everyday conversations that still get the point across and ones that use the exact specific terms and nitpick every single wrong word.

              Also, I believe it’s currently called AI specifically because it is being presented as an artificial intelligence (not that that makes it what the term was first coined to mean). Bard, a named product using LLMs to speak in natural language and present information… it’s artificial, it’s “intelligence”, and it presents itself as such! /s-ish (this isn’t what I believe; it’s how I believe it’s being presented). It’s more personified than ChatGPT or Bing Chat, but it’s not the first time chatbots have been called AI.

              Moreover, I think the term artificial intelligence is fairly broad from a linguistic perspective and extremely narrow from its creators’ point of view (which was quite similar to how current “AI” functions today: image recognition in, responses out; that machine just happened to be analog where ours are digital). So generative AI, and LLMs especially, is almost more accurately called AI than anything else. But that’s kind of limiting too, considering the vast amount of science fiction that has explored AI that is explicitly more than recognizing whether the presented image is a circle or a square.

              I’m with you, though: we should call LLMs what they are, and generative imaging is just that. But if they are put together and I can have what feels like a conversation, and it can show me pictures of what it’s referencing… I’m not going to nitpick the nuances of what this AI is made of, I’m just going to call it an AI. The internal functions can change; it’s still… AI. Just like if I were in front of security cameras, talking with a coworker about a technology that can track moving objects and label various things, I’m probably not going to use the specific terms for the algorithms and image models involved… it’s an AI that can identify things from live video. (See mythicalAI for more on that.)

              So, all in all, I think it really comes down to contextual presentation and the fact that artificial intelligence, by its nature, is a series of constructions. It seems to me there inherently cannot be a single “AI”, because we have shown there are a vast number of ways to arrive at artificial intelligences, and what AI “really is” changes based on who wrote about it or who built it.

  • Bourff@lemmy.world

    If you blindly follow whatever it tells you, you deserve whatever happens to you and your computer.

    • Hamartiogonic@sopuli.xyzOP

      Totally agree. If you learn by nuking your system, you aren’t going to forget that lesson very easily. Fortunately though, there are also nicer ways to learn.

  • okamiueru@lemmy.world

    Filed under: “LLMs are designed to make convincing sentences. Language models should not be used as knowledge models.”

    I wish I got a dollar every time someone shared their surprise at an LLM saying something factually incorrect. I wouldn’t need to work a day.

    • Hamartiogonic@sopuli.xyzOP

      People expect a language model to be really good at other things besides language.

      If you’re writing an email where you need to express a particular thought or feeling, ask an LLM for a good way to say it. The suggestions are often pretty useful, but they may still require some editing.

      • kureta@lemmy.ml

        This use case and asking for information are completely different things. It can stylize some input perfectly fine; it just can’t be a source of accurate information. It is trained to generate text that sounds plausible.

        There are already ways to get around that, even though they aren’t perfect. You can give it the source of truth and ask it to answer using only the information found there, as sketched below. Even then, you should check the accuracy of its responses.
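
        A rough sketch of what that looks like against the OpenAI chat API (the model name, prompts and reference text here are placeholders; the same idea works with any chat-style endpoint):

            curl https://api.openai.com/v1/chat/completions \
              -H "Authorization: Bearer $OPENAI_API_KEY" \
              -H "Content-Type: application/json" \
              -d '{
                "model": "gpt-4o",
                "messages": [
                  {"role": "system", "content": "Answer ONLY using the reference text below. If the answer is not in it, say you do not know.\n\nReference:\n<paste the relevant docs here>"},
                  {"role": "user", "content": "How do I refresh the package signing keys?"}
                ]
              }'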

  • FluffyPotato@lemm.ee

    Oh, yeah, it has a habit of pretending to know things. For example, I work with a lot of proprietary software that has little public documentation, and when asked about it, GPT will absolutely pretend to know it and give nonsensical advice.

    • Hamartiogonic@sopuli.xyzOP

      GPT is riding the highest peak of the Dunning-Kruger curve. It has no idea how little it really knows, so it just says whatever comes out first. We’re still pretty far from having an AI capable of thinking before speaking.

        • Hamartiogonic@sopuli.xyzOP

          I sure hope so. That would give me more time to get some interesting stuff done while an AI could handle all the boring administrative tasks I can’t seem to avoid.

  • gerryflap@feddit.nl

    Why do people expect it to give perfect answers all the time? You should always question whatever it gives as an answer. It’s not a truth machine; it’s an inspiration machine. It can give you some paths to explore that you hadn’t considered before. It probably isn’t aware that zlib is a dependency of many other things, because that’s extremely niche information. So it just gave you generic advice and an example of how to remove a package.

  • JPSound@lemmy.world

    I recently asked ChatGPT “what’s a 5-letter word for a purple flower?” It confidently responded “Violet”, which is six letters. There’s no surprise it gets far more complex questions wrong.

    • Akisamb@programming.dev

      These models do not see letters but tokens. For the model, “violet” is probably two symbols, “viol” and “et”. Short of memorizing the number of letters in each token, it is impossible for the model to know the number of letters in a word.

      This is also why the GPT family is bad at addition: their tokenizer has single symbols for common numbers like 14. That meant that to do 14 + 1 it could not use the knowledge that 4 + 1 is 5, because it could not see the link between the token 4 and the token 14. The Llama tokenizer fixes this, and is thus much better at basic arithmetic even with much smaller models.
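
      If you want to see the splitting for yourself, OpenAI’s tiktoken library will show it from a terminal (pip install tiktoken first; the exact token boundaries vary by tokenizer, so treat the output as illustrative):

          # Prints how each word splits into tokens under the cl100k_base tokenizer
          python3 -c 'import tiktoken; enc = tiktoken.get_encoding("cl100k_base"); print({w: [enc.decode([t]) for t in enc.encode(w)] for w in ["violet", "14", "4"]})'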

  • poweruser@lemmy.sdf.org

    I have had some luck asking it follow-up questions to explain what each line does. LLMs are decent at that and might even discover bugs.

    You could also copy the conversation and paste it into another instance. It is much easier to critique than to come up with something, and this holds true for AI as well, so the other instance can give feedback like “I would have suggested x” or “be careful with commands like y”.

    • throwwyacc@lemmy.world

      This feels like a lot of hoops to jump through to avoid reading a wiki page thoroughly. But if you want to use GPT, this may work.

    • Hamartiogonic@sopuli.xyzOP

      I’ve also tried that, but with mixed results. Generally speaking, GPT is too proud to admit its mistakes. Occasionally I’ve managed to successfully point out a mistake, but usually it just thinks I’m trying to gaslight it.

      Asking follow up questions works really well as long as you avoid turning it into a debate. When I notice that GPT is contradicting itself, I just keep that information to myself and make a mental note about not trusting it. Trying to argue with someone like GPT is usually just an exercise in futility.

      When you have some background knowledge in the topic you’re discussing, you can usually tell when GPT is going totally off the rails. However, you can’t dive into every topic out there, so using GPT as a shortcut is very tempting. That’s when you end up playing with fire, because you can’t really tell if GPT is pulling random nonsense out of its ass or if what it’s saying is actually based on something real.

  • Rustmilian@lemmy.world

    AI is always giving stupid information.
    Shit be so dumb that I have to trick it into acknowledging that yes, removing x package or writing x C code = computer explode.

    • Hamartiogonic@sopuli.xyzOP

      It’s even funnier when you start arguing with it about things like this. Sometimes GPT just refuses to acknowledge its mistakes and sticks to its guns even harder.