I have experience running servers, but I'd like to know whether it's feasible. I just need a private, self-hosted LLM roughly comparable to GPT-3.5 running.

  • entropicdrift@lemmy.sdf.org · 6 months ago

    A quantized model with more parameters is generally better than a full-precision model with fewer parameters. If you can squeeze a 14B-parameter model down to 4-bit integer quantization, it'll still generally outperform the equivalent 7B-parameter model at 16-bit floating point, while using about the same memory.
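
    To see why the memory works out, here's a rough back-of-the-envelope sketch (Python). It counts weight storage only; real quantized formats like GGUF carry some overhead, and the KV cache and activations are extra, so treat the numbers as approximations:

    ```python
    # Approximate VRAM needed just to hold the model weights.
    # Assumptions: dense weights, no KV cache / activation / format overhead.
    def weight_memory_gb(params_billions: float, bits_per_weight: float) -> float:
        bytes_total = params_billions * 1e9 * bits_per_weight / 8
        return bytes_total / 1e9  # decimal GB

    # 14B model at 4-bit int vs. 7B model at 16-bit float:
    print(f"14B @ 4-bit int:   ~{weight_memory_gb(14, 4):.1f} GB")   # ~7 GB
    print(f" 7B @ 16-bit float: ~{weight_memory_gb(7, 16):.1f} GB")  # ~14 GB
    ```

    So the 4-bit 14B model fits in roughly half the memory of the fp16 7B one, which is why quantizing a larger model is usually the better trade.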

    • TheBigBrother@lemmy.worldOP · 6 months ago

      Interesting information mate, I'm reading up on the subject, thx for the help 👍👍