• 👁️👄👁️@lemm.ee · 9 months ago

    Newer Pixels have hardware chips dedicated to AI, which could make it possible to run these models locally. Apple is planning on-device LLMs too. There’s been a lot of development on “small LLMs,” which have a ton of benefits: they’re easier to study, they run on lower-spec hardware, and they use less power.

    • httpjames@sh.itjust.works · 9 months ago

      Smaller LLMs come with huge performance tradeoffs, most notably in their ability to follow prompts. Bard has billions of parameters, so mobile chips wouldn’t be able to run it.

      • 👁️👄👁️@lemm.ee · 9 months ago

        That’s true right now, but small LLMs have only very recently become a focus of development. Judging by how fast LLMs have been improving, I can see that changing very soon.