If a person says they believe an objectively false statement, AIs tend to agree with them – and the problem seems to get worse as models get bigger

  • @SGforce
    39 months ago

They are tuned to agree with you in the first place. It would make sense that even if they can tell the answer is wrong, they will agree with you anyway. I don’t know if that makes it a lie, but it is deliberate in a sense. I would argue it’s more like not wanting to put up a fuss than trying to trick you.

But I’ve seen weird stuff, man. I think we won’t even realise what we’re seeing when newer iterations of this tech actually do start…being.

    • SokathHisEyesOpen
      29 months ago

Why would that make sense? Why would we build tools that keep reinforcing the idea that feelings matter more than facts? An AI should be objective, not obsequious.

      • @SGforce
        19 months ago

It’s a tool. It’s supposed to comply with your commands first and foremost. Getting it right comes after. And we aren’t there yet.