They support Claude, ChatGPT, Gemini, HuggingChat, and Mistral.

  • ocassionallyaduck@lemmy.world · 1 hour ago

    Thing is, for your average user with no GPU and who never thinks about RAM, running a local LLM is intimidating. But it shouldn’t be. Any system with an integrated GPU can run simple models locally, and the more RAM the better.

    The not so dirty secret is that ChatGPT 3 vs 4 isn’t that big a difference, and neither is leaps and bounds ahead of the publicly available models for about 99% of tasks. For that 1% people will ooh and aah over it, but 99% of use cases see only marginal gains on 4o.

    And the simplified models that run “only” 95% as well? They can use 90% fewer resources and give pretty much identical answers outside of hyperspecific use cases.

    Running a “smol” model, as some are called, gets you all the bang for none of the buck, and your data stays on your system and never leaves.

    I’ve been yelling from the rooftops to some stupid corporate types that once the model is trained, it’s trained. Unless you are training models yourself, there is no need for the massive AI clusters; you just need the model itself. Run it locally on your hardware at a fraction of the cost.
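
    To make that concrete: once you have a quantized model file on disk, running it really is just a few lines. A rough sketch with llama-cpp-python (the gguf filename is a placeholder for whatever small model you downloaded):

        # pip install llama-cpp-python
        from llama_cpp import Llama

        # any small quantized model file works; this filename is just an example
        llm = Llama(model_path="llama-3.2-1b-instruct-q4.gguf", n_ctx=2048)
        out = llm("Explain what RAM does, in one sentence.", max_tokens=64)
        print(out["choices"][0]["text"])

    Nothing in that snippet touches the network once the model file is on disk.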

    • LWD@lemm.ee · 3 minutes ago

      There’s the tragedy with this new feature: they fast-tracked this past more popular requests, sticking it into Release Firefox.

      But they only rushed the part that connects to third parties. There was also a “localhost” option, originally listed alongside the Big Five corporate offerings, but Mozilla ultimately decided to bury it inside the about:config settings.
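
      If anyone wants to dig it out: going from memory (double-check the pref names in your Firefox version, they may have changed), flipping something like

          browser.ml.chat.hideLocalhost = false
          browser.ml.chat.provider = http://localhost:8080

      in about:config should surface the localhost option, with the provider URL pointing at whatever local server (llamafile, llama.cpp, etc.) you run.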

  • marcie (she/her)@lemmy.ml · 1 hour ago (edited)

    why a fucking chatbot? translate a page better for me, you fucking losers. all the translation options suck for privacy outside of specifically trained local AIs. this is the BEST use case for a small local LLM, yet mozilla with all its brains and resources couldn’t rub two neurons together for this.

    or they could do character prediction on your typing to make typing faster. just some legit examples. why waste resources building a chat ai into my browser when i can just open a website???
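
    and i don’t mean anything fancy, a toy sketch like this (distilgpt2 is just an example, any tiny local model would do):

        # pip install transformers torch
        from transformers import pipeline

        # small local model; downloads once, then runs fully offline
        predict = pipeline("text-generation", model="distilgpt2")
        text = "i was typing this sentence and the browser suggested"
        print(predict(text, max_new_tokens=5)[0]["generated_text"])

    that footprint fits in a browser. no cloud needed.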

  • Eiri · 5 hours ago

    I wish I had telemetry on such features.

    I really doubt a significant number of people use AI chatbots often enough that having it in a dedicated sidebar is worth it.

  • graphito@sopuli.xyz · 4 hours ago

    The chat isn’t the point; it’s needed as an interface for storing your logins to the summarization features.

    When the internet is written by AI, you do need a tl;dr.

    • thingsiplay@beehaw.org · 3 hours ago

      There are no open source AI models, even if they tell you there are. HuggingFace is the closest thing to something like open source, where you can download AI models and run them locally without an internet connection. There are applications for that. In Firefox, the HuggingChat option uses models from HuggingFace, but I think it runs them on a server rather than downloading them.

      The reason they are not open source is that we don’t know exactly what data they were trained on. We cannot rebuild them on our own. As for trustworthiness, I assume you are talking about the integration and the software using the models, right? At least it is implemented by Mozilla, so there is (to me) some sort of trust involved. Yes, even after all the bullshit, I trust Mozilla.
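
      For what it’s worth, downloading the weights themselves is simple. A rough sketch (the model name is just an example, and some repos make you accept terms first):

          # pip install huggingface_hub
          from huggingface_hub import snapshot_download

          # fetch the weights once; afterwards they can be used offline
          path = snapshot_download("mistralai/Mistral-7B-Instruct-v0.2")
          print(path)  # local directory containing the model files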

    • 1rre@discuss.tchncs.de · 6 hours ago

      I think Mistral is model-available (i.e. I’m not sure if they release training data/code, but they do release the model shape and weights); HuggingChat definitely is open source and model-available.

  • ohwhatfollyisman@lemmy.world · 8 hours ago

    as someone who’s never dabbled with ai bots, what does this feature do? is it only to query for information like a web search?

    • LWD@lemm.ee · 7 minutes ago

      It is a sidebar that sends a query from your browser directly to a server run by a giant corporation like Google or OpenAI, consumes an excessive amount of carbon/water, then sends a response back to you that may or may not be true (because AI is incapable of doing anything but generating what it thinks you want to see).

      Not only is it unethical in my opinion, it’s also ridiculously rudimentary…

    • Ephera@lemmy.ml · 3 hours ago

      From the description in the UI, it does sound like it. Theoretically, a chatbot could be created where you can ask questions about the webpage you currently have open, useful if you don’t want to read a long article, for example. I guess you could just throw a link into an existing chatbot, but direct integration might be more convenient.

      Well, a chatbot could also be created which has access to your browser history, bookmarks and tabs, so you can ask it when you last saw certain information. However, you’d need a locally running chatbot for that, which makes it more difficult to implement.
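
      As a sketch of the “ask about the current page” idea, assuming a locally running Ollama server on its default port (the model name is just an example):

          # pip install requests
          import requests

          page_text = "...text extracted from the currently open tab..."
          resp = requests.post(
              "http://localhost:11434/api/generate",
              json={
                  "model": "llama3.2",
                  "prompt": f"Summarize this page:\n\n{page_text}",
                  "stream": False,
              },
          )
          print(resp.json()["response"])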

    • Furball@sh.itjust.works · 7 hours ago

      It just adds ChatGPT or similar to your sidebar. Chatbots can do a lot of things; they are mostly good for information research and technical help, although they have serious flaws, like sometimes hallucinating false information.