I’ve been looking into self-hosting LLMs or Stable Diffusion models using something like LocalAI and/or Ollama with LibreChat.

Some questions to get a nice discussion going:

  • Any of you have experience with this?
  • What are your motivations?
  • What are you using in terms of hardware?
  • Considerations regarding energy efficiency and associated costs?
  • What about renting a GPU? Privacy implications?
  • Greg Clarke

    I’ve installed Ollama on my gaming rig (RTX 4090 with 128GB of RAM), my M3 MacBook Pro, and my M2 MacBook Air. I’m running Open WebUI on my server, which can connect to multiple Ollama instances. Open WebUI also has its own Ollama-compatible API, which I use for projects. I only boot up the gaming rig if I need larger models; otherwise the M3 MacBook Pro can handle most tasks.
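
    For anyone wondering what using the Ollama-compatible API looks like in a project, here’s a minimal Python sketch hitting the /api/generate endpoint. The host, model name, and the Open WebUI proxy/API-key details are placeholders; adjust them for your own instance.

    ```python
    # Minimal sketch of calling an Ollama-compatible endpoint from a project.
    # Host, model name, and the Open WebUI proxy/API-key details are placeholders.
    import requests

    OLLAMA_URL = "http://localhost:11434"  # or your Open WebUI proxy URL
    HEADERS = {}  # Open WebUI needs an API key, e.g. {"Authorization": "Bearer <key>"}

    resp = requests.post(
        f"{OLLAMA_URL}/api/generate",
        headers=HEADERS,
        json={
            "model": "llama3",  # any model you've pulled with `ollama pull`
            "prompt": "Why might someone self-host an LLM?",
            "stream": False,    # return one JSON object instead of a token stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])
    ```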

    • JackGreenEarth@lemm.ee

      Is that 128GB of VRAM? Because normal RAM doesn’t matter unless you want to run the model on the CPU, which is much slower.

      • Greg Clarke

        That’s 128GB of RAM; the GPU has 24GB of VRAM. Ollama has gotten pretty smart with resource allocation. Smaller models can fit solely in VRAM, but I can still run larger models partly from RAM.
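
        Ollama mostly handles the split automatically, but if you want to steer it yourself, something like the sketch below works; the model tag and the num_gpu value are made-up examples, not recommendations, and `ollama ps` on the CLI will show you how a loaded model ended up split between CPU and GPU.

        ```python
        # Rough sketch of nudging Ollama's VRAM/RAM split by hand instead of
        # letting it decide. "num_gpu" is the number of layers offloaded to the
        # GPU; the model tag and the value 20 are arbitrary examples.
        import requests

        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={
                "model": "llama3:70b",       # a model too big for 24GB of VRAM
                "prompt": "Hello",
                "stream": False,
                "options": {"num_gpu": 20},  # send ~20 layers to the GPU,
                                             # keep the remaining weights in RAM
            },
            timeout=600,
        )
        resp.raise_for_status()
        print(resp.json()["response"])
        ```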

        • JackGreenEarth@lemm.ee

          Any tips on how to get Stable Diffusion to do that? I’m running it through Krita’s AI Image Generation plugin, and with my 6GB of VRAM and 16GB of RAM, VRAM gets very tight when I want to inpaint larger images; I keep getting ‘out of VRAM’ errors. How do I make it switch to RAM when VRAM is full? And with Jan, for that matter, how can I get it to use RAM and VRAM together so I can run models larger than 7B?
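
          For what it’s worth, the underlying trick seems to be what the diffusers library calls CPU offload; a rough sketch is below (the model ID and prompt are just placeholders), but I haven’t found how to trigger the equivalent from Krita’s plugin or from Jan.

          ```python
          # Rough sketch of VRAM -> RAM spillover with Hugging Face diffusers
          # (not Krita's plugin): enable_model_cpu_offload() keeps only the
          # sub-model currently running (text encoder, UNet, VAE) on the GPU
          # and parks the rest in system RAM. Model ID and prompt are placeholders.
          import torch
          from diffusers import StableDiffusionPipeline

          pipe = StableDiffusionPipeline.from_pretrained(
              "stable-diffusion-v1-5/stable-diffusion-v1-5",
              torch_dtype=torch.float16,
          )
          pipe.enable_model_cpu_offload()         # needs `accelerate`; idle parts sit in RAM
          # pipe.enable_sequential_cpu_offload()  # even lower VRAM use, but much slower

          image = pipe("a watercolour fox in a forest").images[0]
          image.save("fox.png")
          ```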