• clb92@feddit.dk · 3 months ago (edited)

    People have been training great Flux LoRAs for a while now, haven’t they? Is a LoRA not a finetune, or have I misunderstood something?
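
    For context, a LoRA is a separate low-rank update trained on top of the frozen base weights rather than a rewrite of the weights themselves, which is why it behaves differently from a full finetune. A rough sketch of the idea in plain PyTorch (the class and names are illustrative, not any particular trainer's API):

    ```python
    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen linear layer plus a trainable low-rank update: W + (alpha/r) * B @ A."""
        def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
            super().__init__()
            self.base = base
            self.base.weight.requires_grad_(False)  # original weights stay frozen
            if self.base.bias is not None:
                self.base.bias.requires_grad_(False)
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: starts as a no-op
            self.scale = alpha / r

        def forward(self, x):
            return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

    layer = LoRALinear(nn.Linear(512, 512))
    out = layer(torch.randn(4, 512))  # identical to the base layer's output at init
    ```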

    • Even_Adder@lemmy.dbzer0.com (OP) · 3 months ago

      Last I heard, LoRAs cause catastrophic forgetting in the model, and full fine-tuning doesn’t really work.

      • clb92@feddit.dk · 3 months ago

        Oh well, in practice I’ll just continue to enjoy this (possibly forgetful and not-fully-finetunable) model, which still gives me amazing results 😊

      • erenkoylu@lemmy.ml · 3 months ago (edited)

        Quite the opposite. LoRAs are very effective against catastrophic forgetting, since the base weights stay frozen and only the small adapter is trained. Full fine-tuning is the riskier route (but also much more powerful).
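
        A quick way to see why, again in plain PyTorch (illustrative names, not any library's actual API): only the small adapter receives gradients, so after a training step the base weights are bit-for-bit unchanged, and dropping the adapter recovers the original model exactly.

        ```python
        import torch
        import torch.nn as nn

        base = nn.Linear(64, 64)
        for p in base.parameters():
            p.requires_grad_(False)                    # base model frozen

        A = nn.Parameter(torch.randn(4, 64) * 0.01)    # low-rank factors, r = 4
        B = nn.Parameter(torch.zeros(64, 4))

        def adapted(x):
            return base(x) + x @ A.T @ B.T             # base output + low-rank correction

        opt = torch.optim.AdamW([A, B], lr=1e-3)       # only A and B are optimized
        w_before = base.weight.clone()

        x = torch.randn(16, 64)
        loss = adapted(x).pow(2).mean()                # dummy objective for the sketch
        loss.backward()
        opt.step()

        assert torch.equal(base.weight, w_before)      # base weights untouched
        ```

        Full fine-tuning updates every weight in place instead, which is exactly where the forgetting (and model-breaking) risk comes from.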