• The Rabbit R1, an AI gadget, runs on an Android app and does not require the “very bespoke AOSP” firmware Rabbit claims it does.
  • The Rabbit R1 launcher app can run on existing Android phones and doesn’t need system-level permissions for its core functionality.
  • Firmware analysis of the Rabbit R1 shows minimal modifications to standard AOSP, contradicting Rabbit’s claim that custom hardware is necessary.
  • db2@lemmy.world · 7 months ago

    You won’t like anything on offer currently except for that which is entirely self hosted.

    • ⓝⓞ🅞🅝🅔 · 7 months ago

      What are a couple of the best self hosted options? I wouldn’t mind giving it a go on my server. I may not have enough juice with what I currently run, but perhaps as a proof of concept first and then new hardware later.

      • hedgehog@ttrpg.network · 7 months ago

        I haven’t used it and only heard about it while writing this post, but Open WebUI looks really promising. I’m going to check it out the next time I mess with my home server’s AI apps. If you want more options, read on.

        Disclaimer: I’ve looked into most of the options below enough to feel comfortable recommending them, but I’ve only personally self-hosted the Automatic1111 webui, the Oobabooga webui, and Kobold.cpp.

        If you want just an LLM and an image generator, then:

        For the image generator, something that leverages Stable Diffusion models:

        And then find models that you like at Civitai.

        For the LLM, the best option depends on your hardware. Not knowing anything about your hardware, I recommend a llama.cpp-based solution. Check out one of these:

        Alternatively, vLLM is allegedly the fastest for multi-user CPU-based inference, though as far as I can tell it doesn’t have its own webui (but it does expose OpenAI-compatible API endpoints).
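        Because those servers all speak the same OpenAI-style protocol, talking to them from a script is easy. Here’s a minimal sketch using only the Python standard library; the base URL and model name are assumptions (vLLM’s OpenAI-compatible server listens on port 8000 by default, llama.cpp’s server and others use different ports), so point it at whatever your setup exposes:

        ```python
        import json
        import urllib.request

        # Assumed local endpoint -- adjust host/port to your own server.
        BASE_URL = "http://localhost:8000/v1"


        def build_chat_request(model: str, prompt: str, temperature: float = 0.7) -> dict:
            """Build an OpenAI-style chat completion payload."""
            return {
                "model": model,
                "messages": [{"role": "user", "content": prompt}],
                "temperature": temperature,
            }


        def chat(model: str, prompt: str) -> str:
            """POST to the /chat/completions endpoint and return the model's reply text."""
            req = urllib.request.Request(
                f"{BASE_URL}/chat/completions",
                data=json.dumps(build_chat_request(model, prompt)).encode("utf-8"),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                body = json.load(resp)
            # OpenAI-compatible servers return replies under choices[0].message.content
            return body["choices"][0]["message"]["content"]
        ```

        Since the request/response shape matches OpenAI’s, the same snippet works against any of the OpenAI-compatible servers mentioned here just by changing BASE_URL.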

        And then find a model you like at Huggingface. I recommend finding a model quantized by TheBloke.
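        One practical note on picking a quant: you can get a rough idea of whether a quantized model will fit in your RAM/VRAM from the parameter count and the bits per weight. This is just my own back-of-envelope arithmetic, not an official formula — real usage will be higher because of the KV cache and runtime overhead:

        ```python
        def estimate_model_size_gb(params_billions: float, bits_per_weight: float) -> float:
            """Rough size of the model weights alone, in gigabytes.

            Ignores KV cache and runtime overhead, so treat the result as a floor.
            """
            bytes_total = params_billions * 1e9 * bits_per_weight / 8
            return bytes_total / 1e9

        # A 7B model at ~4.5 bits/weight (Q4-class GGUF quants land roughly
        # in the 4-5 bits/weight range):
        print(round(estimate_model_size_gb(7, 4.5), 1))  # -> 3.9
        ```

        So a Q4-quantized 7B model needs around 4 GB plus overhead, which is why that size is the usual starting point on modest hardware.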

        There are a couple of communities not on Lemmy that discuss local LLMs - r/LocalLLaMA and r/LocalLLM, for example - so if you’re trying to figure out which model to try, those are good places to check.

        If you want a multimodal AI, you can use llama.cpp with a model like LLaVA. The options below also have multimodal support.

        If you want an AI assistant with expanded capabilities - like searching your documents or the web (RAG), etc. - then I don’t have a ton of experience there, but these seem to do that job:

        If you want to use your local model as more than just a chat bot - integrating it into your IDE or a browser extension - then there are options there, and as far as I know every LLM above can be configured to expose an API so it can be used by your other tools. Some, like Open WebUI, expose OpenAI-compatible APIs and so can be used with tools built for OpenAI. I don’t know of many tools like this, though - surprisingly, I wasn’t able to find a browser extension that could use your own API, for example. Here are a couple of examples:

        Also, this Medium article lists some of the things I described above, as well as several others I’d never heard of.

      • bassomitron@lemmy.world · 7 months ago (edited)

        That’s a deep rabbit hole (no pun intended). I know it’s blasphemy to mention the other site around here, but check out the r/LocalLLaMA subreddit - it covers more models than just LLaMA. There are literally thousands of variations at this point, so preferences are quite subjective to your use case, and your best bet is to start researching on your own for your intended purposes and available resources. Hugging Face is the main model repository, as well.

        • Balder@lemmy.world · 7 months ago

          Yeah, the best way through it is to grab a few of the most recommended ones and test them yourself.