The study, published today as part of the proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024) – the premier international conference in natural language processing – reveals that LLMs have a superficial ability to follow instructions and excel at proficiency in language, however, they have no potential to master new skills without explicit instruction. This means they remain inherently controllable, predictable and safe.

  • tal@lemmy.today
    3 months ago

    That is an extremely poor choice of title.

    Title: AI poses no existential threat to humanity – new study finds

    The text:

    The study, published today as part of the proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024) – the premier international conference in natural language processing – reveals that LLMs have a superficial ability to follow instructions and excel at proficiency in language, however, they have no potential to master new skills without explicit instruction. This means they remain inherently controllable, predictable and safe.

    The study isn’t saying that AI in general doesn’t pose an existential threat. It’s saying that a particular class of limited software that’s being used today to generate images and audio and act as a chatbot doesn’t pose an existential threat.

    Like, this is a “no shit” result. Maybe it’s got some value in that some people might be scared that OpenAI’s stuff is going to haul off and turn into Skynet or something, so maybe it helps to have someone actually make that clear. But in terms of realistic concerns, it’s not about the very limited stuff that we’re doing right now. It’d be a question about more sophisticated systems.

    • hendrik@palaver.p3x.de
      3 months ago

      Came here to say the same. It’s an interesting question what in-context learning can do, but the title is silly. They’re kind of predicting the past: we already know we’re still alive, so, sure, past models didn’t have the ability to pose an existential threat. At the same time, I’d argue they haven’t been intelligent enough to do serious harm anyway, so that doesn’t really add anything. The existential question is: will AI be able to progress to that point in the future? We have some reason to think so.