• floofloof · 5 months ago

    In February, Wei’s team announced BitNet b1.58, in which parameters can equal -1, 0, or 1, which means they take up roughly 1.58 bits of memory per parameter (log2(3) ≈ 1.58). A BitNet model with 3 billion parameters performed just as well on various language tasks as a full-precision LLaMA model with the same number of parameters and amount of training, but it was 2.71 times as fast, used 72 percent less GPU memory, and used 94 percent less GPU energy. Wei called this an “aha moment.” Further, the researchers found that as they trained larger models, the efficiency advantages improved.

    That’s pretty impressive.
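
    For anyone wondering what “parameters can equal -1, 0, or 1” looks like in practice, here’s a rough sketch of ternary (absmean) quantization in the style the BitNet b1.58 paper describes. It’s a toy illustration, not the actual training code, and the function name is mine:

    ```python
    import numpy as np

    def ternary_quantize(w):
        """Quantize float weights to {-1, 0, 1} plus one per-tensor scale."""
        scale = np.mean(np.abs(w)) + 1e-8        # absmean scaling factor
        q = np.clip(np.round(w / scale), -1, 1)  # every entry becomes -1, 0, or 1
        return q.astype(np.int8), scale

    w = np.random.randn(4, 4).astype(np.float32)
    q, scale = ternary_quantize(w)
    print(q)                            # entries only from {-1, 0, 1}
    print(np.abs(w - q * scale).max())  # worst-case quantization error
    ```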

    • remotelove · 5 months ago

      What exactly is a half bit? Is a transistor “half open” or something?

      • floofloof · 5 months ago

        I’m no expert, but I think it’s just a statistical measure of information, not something that can be physically realized in isolation. Storing one of n equally likely states takes log2(n) bits: two states take 1 bit, four states take 2 bits, so three states land somewhere in between, at log2(3) ≈ 1.58 bits. (There’s a sketch of how that plays out below.)

        I did a bit of a search and found this: How can there be a fraction of a bit?
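
        To make that concrete: you can’t store 1.58 bits in isolation, but you can amortize across several parameters. Since 3^5 = 243 ≤ 256 = 2^8, five three-valued parameters fit in one byte, i.e. 1.6 bits apiece. A quick sketch (the packing scheme here is just an illustration, not how BitNet actually stores weights):

        ```python
        from math import log2

        print(log2(3))  # ≈ 1.585 bits of information per three-valued parameter

        # 3**5 = 243 <= 256, so five ternary values fit in one byte: 8/5 = 1.6 bits each.
        def pack5(trits):
            """Pack five values from {-1, 0, 1} into one byte as base-3 digits."""
            n = 0
            for t in trits:
                n = n * 3 + (t + 1)  # map -1, 0, 1 -> 0, 1, 2
            return n                 # 0..242, fits in a byte

        def unpack5(n):
            trits = []
            for _ in range(5):
                trits.append(n % 3 - 1)
                n //= 3
            return trits[::-1]

        b = pack5([-1, 0, 1, 1, -1])
        print(b, unpack5(b))  # 51 [-1, 0, 1, 1, -1], round-trips exactly
        ```

        So 1.58 bits per parameter is the information-theoretic floor, and grouping more parameters together gets you arbitrarily close to it.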