• tyler@programming.dev · 6 months ago

    LLMs have been shown to have emergent math capabilities (ones that run counter to what they were trained on), so you’re simplifying way too much. Yes, a lot of it is just “predictive text”, but there’s also a ton of “this wasn’t in the training and we don’t know how it knows this”.

    • anachronist@midwest.social · 6 months ago

      Game of Life has cool emergent properties that are a lot more interesting and fun to play with than LLMs. LLMs also have emergent properties, like failing classification when individual image pixels are manipulated.
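
      Conway’s rules are genuinely tiny, which is what makes the emergence fun to play with. A minimal sketch (plain Python, set-of-coordinates representation, the standard B3/S23 rules and a standard glider; none of this is from any particular library):

      ```python
      from collections import Counter

      def step(live_cells):
          """Advance one generation; live_cells is a set of (x, y) coordinates."""
          # Count how many live neighbors every candidate cell has.
          neighbor_counts = Counter(
              (x + dx, y + dy)
              for (x, y) in live_cells
              for dx in (-1, 0, 1)
              for dy in (-1, 0, 1)
              if (dx, dy) != (0, 0)
          )
          # B3/S23: a cell is alive next generation if it has exactly 3 live
          # neighbors, or exactly 2 and is already alive.
          return {
              cell
              for cell, count in neighbor_counts.items()
              if count == 3 or (count == 2 and cell in live_cells)
          }

      # A glider: five cells that travel diagonally forever, a behavior
      # nowhere written into the two rules above.
      glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
      for _ in range(4):
          glider = step(glider)
      print(sorted(glider))  # same shape, shifted one cell down and right
      ```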