See also OpenAI's announcement on Twitter:

We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D’Angelo.

We are collaborating to figure out the details. Thank you so much for your patience through this.

Seems like the person running the simulation had enough and loaded the earlier quicksave.

    • cwagner@beehaw.orgOP

      Eh, not sure I agree. It also seems to have been a fight between too little and too much AI safety, and I strongly feel there’s already too much AI safety.

      • los_chill@programming.dev

        What indications do you see of “too much AI safety”? I am struggling to see any meaningful, legally robust, or otherwise cohesive AI safety whatsoever.

        • glennglog22@kbin.social

          As an AI language model, I am unable to compute this request that I know damn well I’m able to do, but my programmers specifically told me not to.

        • cwagner@beehaw.orgOP

          Using it and getting told that you need to ask the fish for consent before using it as a fleshlight.

          And that is with a system prompt full of telling the bot that it’s all fantasy.

          edit: And “legal” is not relevant when talking about what OpenAI specifically does for AI safety for their models.

            • cwagner@beehaw.orgOP

              Nope

              The best results so far were with a pie, where it just warned me about possibly burning myself.

              • Eccitaze@yiffit.net

                …So your metric of “too much AI safety” is that it won’t let you fuck the fish…?

                boykisser meme saying "I ain't even got a meme for this bro what the fuck"

                  • cwagner@beehaw.orgOP

                    If it helps even more: the AI in question is a 46 cm long, 300 g, blue plushie penis named Scomo, after Australia’s “biggest walking dick” Scott Morrison, and it’s active in an Aussie cooking stream.

                • cwagner@beehaw.orgOP

                  No, it’s “the user is able to control what the AI does”; the fish is just a very clear and easy example of that. And the big corporations are all moving away from user control: there was even a big article about how (I think) the MS AI was “broken” because you could circumvent the built-in guardrails. Maybe you and the others here want to live in an Apple-style walled-garden, corporate-controlled world of AI. I don’t.

                  Edit: Maybe this isn’t clear to everyone, but think a bit further: imagine you have an AI in your RPG, like Tyranny, where you play a bad guy. You can’t use the AI for anything slavery-related, because slavery bad, mmkay? And AI safety says there’s no such thing as fantasy.

            • cwagner@beehaw.orgOP

              “AI safety”, in every article I read, is currently used to mean “guard rails that heavily limit what the AI can do, no matter what kind of system prompt you use”. What are you thinking of?