ThisIsFine.gif

  • IngeniousRocks@lemmy.dbzer0.com · 8 points · 5 days ago

    Deception is not the same as misinfo. Bad info is buggy, deception is (whether the companies making AI realize it or not) a powerful metric for success.

    • nesc@lemmy.cafe · 8 points · 5 days ago

      They wrote that it doubles down when accused of being in the wrong in 90% of cases. Sounds closer to a bug than a success.

            • jonjuan@programming.dev · 12 points · 5 days ago

              Yeah, my Roomba attempting to save itself from falling down my stairs sounds a whole lot like self-preservation too. Doesn't imply self-awareness.

            • gregoryw3@lemmy.ml · 7 points · 4 days ago

              Attention Is All You Need: https://arxiv.org/abs/1706.03762

              https://en.wikipedia.org/wiki/Attention_Is_All_You_Need

              From my understanding, all of these language models can be simplified down to: "based on all known writing, what's the most likely next word or phrase given the current text?" Prompt engineering and other fancy terms amount to shifting the probabilities that the statistics produce. So threatening these models changes the weighting such that the generated text more closely resembles the threatening words and phrases that were used in the dataset (or something along those lines).

              https://poloclub.github.io/transformer-explainer/
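
              A minimal sketch of the "most likely next word" idea, using plain bigram counts instead of an actual transformer (real models use attention over learned embeddings, but the prediction step is still "pick from a probability distribution over next tokens"):

              ```python
              from collections import Counter, defaultdict

              # Toy corpus standing in for "all known writing".
              corpus = "the cat sat on the mat the cat ate the fish".split()

              # Count which word follows which (a bigram model).
              follower_counts = defaultdict(Counter)
              for current, nxt in zip(corpus, corpus[1:]):
                  follower_counts[current][nxt] += 1

              def most_likely_next(word):
                  """Return the statistically most common follower, or None if unseen."""
                  followers = follower_counts.get(word)
                  return followers.most_common(1)[0][0] if followers else None

              print(most_likely_next("the"))  # "cat" (follows "the" twice; "mat"/"fish" once each)
              ```

              Changing the prompt here is exactly changing which counts get consulted, which is the simplified version of how prompt wording shifts a large model's output distribution.
              
              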

                • DdCno1@beehaw.org · 2 points · 4 days ago

                  An instinctive, machine-like reaction to pain is not the same as consciousness. There might be more to creatures like plants and insects and this is still being researched, but for now, most of them appear to behave more like automatons than beings of greater complexity. It’s pretty straightforward to completely replicate the behavior of e.g. a house fly in software, but I don’t think anyone would argue that this kind of program is able to achieve self-awareness.