• Spuddlesv2 · 8 months ago

    Ahhh so the secret to using ChatGPT successfully is to tell it to give you good output?

    Like “make sure the code actually works” and “don’t repeat yourself like a fucking idiot” and “don’t hallucinate false information”!

    • Natanael@slrpnk.net · 8 months ago

      Unironically yes, sometimes. Many of the best works its training samples are drawn from cite the original author’s qualifications, and this carries into the model: asking it to assume the right qualifications can nudge it toward relying on those high-quality samples when generating its response.

      But it’s still not perfect, obviously. It doesn’t make it stop hallucinating.

      • FaceDeer@fedia.io · 8 months ago

        Yeah, you still need to give an AI’s output an editing and review pass, especially if factual accuracy is important. But though some may mock the term “prompt engineering” there really are a bunch of tactics you can use when talking to an AI to get it to do a much better job. The most amusing one I’ve come across is that some AIs will produce better results if you offer to tip them $100 for a good output, even though there’s no way to physically fulfill such a promise. The theory is that the AI’s training data tended to have better stuff associated with situations where people paid for it, so when you tell the AI you’re willing to pay it’ll effectively go “ah, the user is expecting good quality.”

        You shouldn’t have to worry about the really quirky stuff like that unless you’re an AI power-user, but a simple request for high-quality output can go a long way. Assuming you want high quality output. You could also ask an AI for a “cheesy low-quality high-school essay riddled with malapropisms” on a subject, for example, and that would be a different sort of deviation from “average.”
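
        As a toy illustration of that kind of framing, here’s a hypothetical sketch (the function name and wording are mine, not from any real API) of how you might wrap a task in quality-biasing boilerplate, including the infamous tip offer:

```python
# Hypothetical sketch: prepend quality-biasing framing to a task prompt.
# Nothing here is a real API; the wording is purely illustrative.

def build_prompt(task: str, quality: str = "high") -> str:
    """Wrap a task in framing that nudges the model toward a register."""
    if quality == "high":
        framing = (
            "You are a meticulous expert. Produce accurate, "
            "well-checked output. I'll tip $100 for a great answer.\n\n"
        )
    else:
        framing = (
            "Write a cheesy low-quality high-school essay, "
            "riddled with malapropisms.\n\n"
        )
    return framing + task

print(build_prompt("Explain how TLS certificate validation works."))
```

        The framing costs nothing to add, and per the tactics above it can measurably shift the register of what comes back.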

    • KeenFlame@feddit.nu · 8 months ago

      Absolutely, it’s one of the first curious things you discover when using them, like Stable Diffusion’s “masterpiece” tag or the famous system prompt leaks from proprietary LLMs.

      It makes sense given how they work, but in proprietary products it’s mostly handled for you.

      Finding the right words, and the right amount of them, is a hilarious exercise that gives pretty good insight into the attention mechanics.

      Consider “let’s work step by step”.

      This proved a revolutionary way to prompt the models, as they then structure their output better; there has since been more research into why it’s so amazingly effective at making the model proof-check itself.
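
      To make the trick concrete, here’s a hypothetical sketch of zero-shot step-by-step prompting; `llm` stands in for any prompt-to-completion callable, not a real library:

```python
# Hypothetical sketch of the "let's work step by step" trick.
# `llm` is a placeholder for any prompt -> completion callable.

def with_cot(question: str) -> str:
    """Append the step-by-step nudge to a question."""
    return question + "\n\nLet's work through this step by step."

def answer(llm, question: str) -> str:
    return llm(with_cot(question))

# Stub model, just to show the plumbing:
stub = lambda prompt: f"(model received {len(prompt)} characters)"
print(answer(stub, "What is 17 * 24?"))
```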

      Prediction is obviously closely related to the action-planning parts of our brains as well, so it makes sense that it would help, when you think about it.

    • kromem@lemmy.world · 8 months ago

      Literally yes.

      For example, about a year ago one of the multi-step prompting papers that improved results had the model first guess which expert would be best equipped to answer the question, then asked it to answer as that expert in a second pass, and it did a better job than when it tried to answer directly.
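
      A hypothetical sketch of that two-pass pattern (the prompt wording is mine, not the paper’s; `llm` is a placeholder for any model call):

```python
# Hypothetical sketch of two-pass "expert" prompting, as described above.
# `llm` is a placeholder for any prompt -> completion callable.

def expert_answer(llm, question: str) -> str:
    # Pass 1: ask which expert is best equipped to answer.
    expert = llm(
        "Name the single expert best equipped to answer this question, "
        f"and say nothing else:\n{question}"
    )
    # Pass 2: answer the question in that expert's voice.
    return llm(
        f"You are {expert.strip()}. Answer as that expert:\n{question}"
    )

# Stub model, just to show the control flow:
stub = lambda p: "a cryptographer" if p.startswith("Name") else "ANSWER"
print(expert_answer(stub, "How does RSA key generation work?"))
```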

      The pretraining is a regression toward the mean, so you need to bias it back toward excellence with either fine-tuning or in-context learning.