• Petter1@lemm.ee
    4 days ago

    I think this is only an issue in the beginning; people will sooner or later realise that they can’t blindly trust LLM output and will learn how to craft prompts that verify the answers (or rather, show that not enough relevant data was analysed and that the output is a hallucination).