• 0laura@lemmy.world · 2 months ago

        true. it kinda sucks seeing people argue against ai when they don’t understand it. like, there are many things to criticize about ai, and I’m not saying they’d like ai if they knew more about it. I just wish their hatred was more educated.

        • PeriodicallyPedantic · 2 months ago

          Idk about Lora specifically. I agree that in general it’d be good if haters of anything were more educated about that thing, but they probably don’t need to be that educated, especially in niche topics.

          I’d consider myself an AI hater.
          My job is also building ai-assisted tools.
          But I also don’t need to understand how the AI works to understand its use, beyond a bit of prompt engineering.

          Don’t ask about my job satisfaction 😭

      • 0laura@lemmy.world · 2 months ago

        I’m talking about LoRA, not LoRa. I’m a fan of both, though. I’ve been considering getting a Lilygo T-Echo to run Meshtastic for a while. Maybe build a solar-powered RC plane and put a Meshtastic repeater in it; seems like a cool project.

        https://en.wikipedia.org/wiki/Fine-tuning_(deep_learning)

        Low-rank adaptation (LoRA) is an adapter-based technique for efficiently fine-tuning models. The basic idea is to design a low-rank matrix that is then added to the original matrix.[13] An adapter, in this context, is a collection of low-rank matrices which, when added to a base model, produces a fine-tuned model. It allows for performance that approaches full-model fine-tuning with less space requirement. A language model with billions of parameters may be LoRA fine-tuned with only several millions of parameters.
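        to make the quoted description concrete, here’s a minimal numpy sketch of the idea for a single linear layer. the shapes and rank are illustrative picks, not from any real model: the frozen base weight W gets a low-rank correction B @ A, and only A and B would be trained.

        ```python
        import numpy as np

        # Illustrative sketch of low-rank adaptation (LoRA) on one linear layer.
        # All names and sizes here are made up for the example.
        rng = np.random.default_rng(0)

        d_in, d_out, r = 512, 512, 8        # rank r is much smaller than the layer dims

        W = rng.standard_normal((d_out, d_in))     # frozen base weights (not trained)
        A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
        B = np.zeros((d_out, r))                   # trainable up-projection, zero-init
                                                   # so the adapter starts as a no-op

        def forward(x):
            # Fine-tuned layer: base output plus the low-rank correction B @ A.
            return x @ (W + B @ A).T

        # Only A and B are trainable: 8*512 + 512*8 = 8192 parameters,
        # versus 512*512 = 262144 in the frozen base matrix (~3%).
        base_params = W.size
        lora_params = A.size + B.size
        ```

        this is where the space savings in the quote come from: the adapter is just the small A and B matrices, which can be stored and swapped independently of the base model.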