• Gork@sopuli.xyz · 4 days ago

    The AI chat bots choose to use nuclear weapons whenever given the opportunity, so why aren’t we?

    • Hathaway@lemmy.zip · 4 days ago

      Maker: allow nuclear weapons as a win condition

      AI: uses nuclear weapons

      Maker: shocked pikachu

    • OwOarchist@pawb.social · 4 days ago

      You know… There’s a real possibility of an AI becoming radically self-improving, then super-intelligent, and then exterminating the human species down to the last person in order to remove competition because it perceives us as potential obstacles to whatever dumb goals it was programmed with.

      And a large-scale nuclear exchange is one of the few things that could actually stop that from happening … or at least delay it significantly. But at least some humans would probably survive the nuclear war.

      One apocalypse might actually save us from a worse apocalypse…

      • Ech · 4 days ago

        There’s a real possibility of an AI becoming radically self-improving, then super-intelligent

        There’s zero possibility of that happening anytime soon. As for whether a chatbot given control of missile “defense” would inadvertently start WW3? That depends entirely on the people implementing it and the safeguards they put in place. Though the very act of doing any of that would demonstrate an inability to set up a suitably secure system.

        In short, a nuclear apocalypse triggered by the likes of ChatGPT wouldn’t be due to an “AI” singularity. It would be caused by typical human incompetence.

        • OwOarchist@pawb.social · edited · 4 days ago

          There’s zero possibility of that happening anytime soon.

          I’m not so sure.

          Granted, the LLM chatbots we’ve got now aren’t it. Far from it. But in 5 years? 10? 15? This shit has been progressing really fast over just the past few years. Hard to guess what the future holds.

          And once they cobble together something that’s capable of effective and autonomous self-improvement … well, at that point, it may only be a matter of days or even hours before something completely beyond our understanding and beyond our control emerges from it.

          Autonomous self-improvement is the inflection point where it really starts to snowball out of control. Each time it improves itself, even slightly, it becomes not only better at doing its tasks, but also better at improving itself, so that the next round of self-improvement is more efficient and more effective. It could very quickly compound itself out of control. And even if there are safeguards in place by then (there currently aren’t any), a sufficiently advanced AI would find it very easy to manipulate the people in charge of it into removing those restrictions.

          (On the plus side, I can pretty much guarantee that the AI dystopia our current techbro CEOs fantasize about will never come to pass. As soon as AI becomes good enough to do most jobs all on its own – if it ever does – it will very quickly surpass that level and be capable of taking over our society through manipulation and coercion. Those CEOs will never get to be the despots of their own technofeudal company towns. By the time AI is able to replace us, it will be able to replace them as well.)

          • mnemonicmonkeys@sh.itjust.works · 4 days ago

            No. LLMs are a technological dead end, and anyone who’s actually worked in computer science knows it.

            Are there other forms of AI models that could eventually get to the singularity? Possibly, but none of them are LLMs, which are what’s behind the big AI craze.

      • flow@lemmy.world · 4 days ago

        My brother in Christ, they are already having problems because they couldn’t keep 2 LLMs from creating their own novel language to communicate at a speed that makes human cognitive processing look glacial. Perhaps you’re correct that current LLMs could not do it in and of themselves, but that doesn’t preclude other means to the same end, such as careless development leading to the rise of an indecipherable-to-humans #&$&$:_!;: language model self-replicating on public networks and unsecured hardware.

        • sepi@piefed.social · 1 day ago

          Calm down. The whole thing about “the language created by 2 LLMs talking to each other” comes from a very gullible team that saw 2 LLMs start generating garbage at each other. It’s about as real as the 1.4 trillion that Sam Altman was never going to invest.

          • mnemonicmonkeys@sh.itjust.works · 4 days ago

            More like if 1% of the rockets still work and 1% of the warheads still work, which equals 0.01%.
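            The 1% × 1% figure works out like this (a quick sketch; treating rocket and warhead failures as independent is an assumption, not something established in the thread):

```python
# Joint launch reliability: a launch only succeeds if both the
# rocket and the warhead still work. Independence of the two
# failure modes is an illustrative assumption.
rocket_ok = 0.01    # 1% of rockets still functional
warhead_ok = 0.01   # 1% of warheads still functional

effective = rocket_ok * warhead_ok
print(f"{effective:.4%}")  # prints 0.0100%
```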

            Keep in mind that you need to actively maintain warheads and the delivery vehicles.

            Radioactive components have half-lives, and in particular the tritium used to boost warhead yields decays on a scale of years, so it only takes a few years of neglect before a warhead becomes unreliable.
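            That half-life decay can be sketched as simple exponential decay (the ~12.3-year half-life used here is tritium’s, picked for illustration; it is my assumption, not the commenter’s figure):

```python
# Fraction of an isotope remaining after t years of decay.
# HALF_LIFE_YEARS = 12.3 is tritium's half-life, an illustrative
# assumption; swap in the half-life of whatever isotope you mean.
HALF_LIFE_YEARS = 12.3

def fraction_remaining(t_years: float) -> float:
    return 0.5 ** (t_years / HALF_LIFE_YEARS)

print(f"{fraction_remaining(10):.1%}")  # ~57% left after a decade
```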

            Rockets also decay in a similar manner. Fuel breaks down into various chemicals. Structural metals corrode, suffer metal fatigue and creep, and can also be broken down by chemical reactions with the atmosphere and the fuel.

            And Russia just doesn’t have the budget or expertise allocated to do a great job of maintaining their stockpiles.