I see ads for paid prompting courses a bunch. I recommend having a look at this guide page first. It also has some other info about LLMs.

  • hendrik@palaver.p3x.de · 18 days ago

    Is prompt engineering even a scientifically backed discipline? Or is it esoterica, like homeopathy? Sure, putting in the right prompt is crucial to making LLMs perform well. But this depends heavily on the exact model and how it was fine-tuned, and it takes real effort and a methodical approach to test these things. I wonder if these courses consist of anecdotal evidence, or if people actually studied and tested their advice… Because a lot of what I read in internet forums and the like is trial and error; everyone has their own truths and lore…

    • Smorty [she/her]@lemmy.blahaj.zone (OP) · 18 days ago

      You are completely right, and it is mostly about trial and error. I’d assume these courses mainly teach things you can do with the big bois, those from the obvious big evil AI companies. It’s very much an overblown topic, and companies pretend it’s actually a fancy thing to learn and be good at.

      The linked guide just explains the basic concepts of few-shot prompting, CoT, RAG and such. Even these terms, I feel, make the topic seem more complicated than it is. It could literally be summarized as:

      • Use examples of what you want
      • Use near-zero temperature for almost everything
      • For complex tasks, tell it to provide its internal thought process before giving the answer (or just use the QwQ model)
      • maybe SCREAM AT THE LLM IN ALLCAPS if something is really important
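
      The first three bullets can be sketched in code. This is a minimal, illustrative example assuming an OpenAI-style chat message format (the helper name and the example task are made up; the commented-out API call at the end is just a reminder of where the temperature setting would go):

      ```python
      def build_messages(task, examples, question):
          """Assemble a few-shot, chain-of-thought style message list."""
          # System prompt states the task and asks for reasoning before the answer (CoT).
          messages = [{"role": "system",
                       "content": task + " Think step by step before answering."}]
          # Few-shot: show the model examples of what you want.
          for user_text, assistant_text in examples:
              messages.append({"role": "user", "content": user_text})
              messages.append({"role": "assistant", "content": assistant_text})
          messages.append({"role": "user", "content": question})
          return messages

      messages = build_messages(
          "Classify the sentiment of the review as positive or negative.",
          [("Great battery life!", "positive"),
           ("Broke after two days.", "negative")],
          "The screen is gorgeous.",
      )
      # When actually calling a model, keep temperature near zero:
      # client.chat.completions.create(model=..., messages=messages, temperature=0.0)
      ```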
      • hendrik@palaver.p3x.de · 18 days ago

        I skimmed the link you provided. Yes, that seems to include solid advice. Good for beginners, but nothing new to me, since I’ve (somewhat) followed the AI hobby enthusiast community since LLaMA1. But I’ll have to look up what writing in all caps does; I suppose that severely messes with the tokenizer?! Though I’ve seen the big companies do this too, in some of the leaked prompts.

        And I guess with the “early” models from 2023 and before, it was much more important to get the prompts exactly right, not to confuse the model, etc. That got way better as models improved substantially, and now these models (at least) get what I want from them almost every time. But I think we’ve picked the low-hanging fruit and can’t expect the models themselves to improve as fast as they did in the past. So it’s down to prompting strategies and other methods to improve the performance of chatbots.