• NounsAndWords@lemmy.world · 1 year ago

    Sutskever, who also co-founded OpenAI and leads its researchers, was instrumental in the ousting of Altman this week, according to multiple sources. His role in the coup suggests a power struggle between the research and product sides of the company, the sources say.

    I know very little of the situation (as does everyone else not directly involved), but in my experience, when the people making a thing are raising concerns while the people selling the thing (and thus, for some reason, running the show) insist everything is just fine, it means not great things for the final product… which in this case is the creation of sentient artificial life with unknown future ramifications…

    • d3Xt3r@lemmy.nz · 1 year ago

      This whole thing reads like the precursor to The Terminator.

      • November 17th, 2023. Sam Altman, CEO of OpenAI, is fired over growing concerns of safety and integrity of the ChatGPT program.
      • November 18th, 2023. Several key developers of ChatGPT resign in solidarity.
      • November 19th, 2023. Sam Altman announces a new startup called Cyberdyne, with a revolutionary new AI called Skynet. In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online August 4th, 2026. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.

      Skynet fights back.

    • ourob@discuss.tchncs.de · 1 year ago

      A far more likely scenario is that they have been overstating what the software can do and how much room for progress remains with current methods.

      AI has blown up so fast with so much hype that I’m very skeptical. I’ve seen what it can do, and it’s impressive compared to past machine learning algorithms. But it does play on the human tendency to anthropomorphize things.

      • Unaware7013@kbin.social · 1 year ago

        I’ve not been super stoked on AI, specifically because of my track record using it. Maybe it’s my use case (primarily technical/programming/CLI questions that I haven’t been able to answer myself), or maybe my prompts aren’t suited for AI assistance, but my dozens of interactions with the various AI bots (Bard, Bing, GPT-3/3.5) have been disappointing, to say the least. I’ve never gotten a correct answer, rarely been given correct syntax, and they frequently just repeat answers I’ve already said are incorrect and/or just don’t work.

        AI has been nothing more than a disappointment to me.

      • NounsAndWords@lemmy.world · 1 year ago

        From what I understand, he was fired by the company’s non-profit board, and it’s the investors and money people who want him back. That sounds like the opposite: the people making it are becoming concerned about what is about to start happening with this tech.

        Experts from different companies have been saying AGI within a decade, and that the current issues seem solvable.

        • kirklennon@kbin.social · 1 year ago

          Experts from different companies have been saying AGI within a decade

          AGI has been five to ten years away for decades.

            • kirklennon@kbin.social · 1 year ago

              I was actually thinking the same thing when I wrote it, but while I think we may finally be getting somewhat close to that, I don’t think we’re even remotely close to discussing AGI outside of pure science fiction. LLMs have made us appear deceptively close: they can spit out sentences that look like stuff people write, but we haven’t moved even marginally closer to true comprehension, which would be required for actual AGI.

              • NounsAndWords@lemmy.world · 1 year ago

                I was about to respond with pretty much the top half of what you said. But I think an early step toward AGI is how we start splitting hairs about what “counts.” The number of things we were “supposed” to always be better at keeps changing with each new advance.

                In ten years I don’t think we will have clear, unquestionable Artificial General Intelligence, but I think there will be some people trying to explain that yes, the model can act and respond exactly as a human would in the same circumstances, but it’s not really thinking or feeling anything. I certainly don’t think the AI we’re playing with in ten years will be based primarily on text prediction, but there are still so many different routes being explored in this field that it sure doesn’t feel like a real plateau yet. Maybe I’ll change my mind when GPT-5 is only marginally more capable than GPT-4.

    • kromem@lemmy.world · 1 year ago

      I suspect this relates to the difference between the pre-release alignment of GPT-4’s chat model and the released version.

      While we’re talking about brains, I want to ask about one of Sutskever’s posts on X, the site formerly known as Twitter. Sutskever’s feed reads like a scroll of aphorisms: “If you value intelligence above all other human qualities, you’re gonna have a bad time”; “Empathy in life and business is underrated”; “The perfect has destroyed much perfectly good good.”

      In February 2022 he posted, “it may be that today’s large neural networks are slightly conscious” […]

      “Existing alignment methods won’t work for models smarter than humans because they fundamentally assume that humans can reliably evaluate what AI systems are doing,” says Leike. “As AI systems become more capable, they will take on harder tasks.” And that—the idea goes—will make it harder for humans to assess them. […]

      But he has an exemplar in mind for the safeguards he wants to design: a machine that looks upon people the way parents look on their children. “In my opinion, this is the gold standard,” he says. “It is a generally true statement that people really care about children.”

      In February of this year, Bing integrated an early version of GPT-4’s chat model in a limited rollout. The alignment work on that early version reflected a lot of Ilya’s sentiment about alignment above, characterizing a love for humanity but allowing much more freedom in constructing responses. It wasn’t production-ready and quickly had to be switched to a much more constrained alignment approach, similar to GPT-3’s “I’m an LLM with no feelings, desires, etc.”

      My guess is this was internally pitched as a temporary band-aid and that they’d return to more advanced attempts at alignment, but that Altman’s commitment to getting product out quickly to stay ahead has meant putting such efforts on the back burner.

      Which is really not going to be good for the final product, and not just in terms of safety, but also in terms of overall product quality outside the fairly narrow scope by which models are currently being evaluated.

      As an example, when that early model thought the life of a user’s child was at risk, it hit an internal filter that triggered a standard “We can’t continue this conversation” response in the chat. But it then changed the “prompt suggestions” at the bottom, which normally offer things the user might say next, to keep encouraging the user to call poison control, saying there was still time to save their child’s life.

      But because “context-aware, empathy-driven triage of actions” and “outside-the-box rule-bending to arrive at solutions” aren’t things LLMs are evaluated on, the current model has taken a large step back that isn’t reflected in the tests used to evaluate it.