Reversal knowledge here meaning: if the LLM knows that A is B, does it also know that B is A? Apparently the answer is a pretty resounding no. I'd be curious to see whether some CoT affects the results at all.

  • kfet · 1 year ago

    That’s a logical fallacy: given that A is B, it does not follow that B is A.

    edit: it would make sense if it were phrased as “A is equivalent to B”. Saying “A is B” in a scientific context has a very specific meaning. Makes me wonder how trustworthy the paper itself is.

    • noneabove1182@sh.itjust.works (OP) · 1 year ago

      I’m not really sure I follow; it’s just a simplification. The most appropriate phrasing would probably be “given A belongs to B, does it know B ‘owns’ A”, like the examples given: “A is the son of B; is B the parent of A?”

      • kfet · 1 year ago

        Looks like the findings are specifically about out-of-context learning, i.e. fine-tuning on facts like “Tom Cruise’s mother was Mary Lee Pfeiffer” is not enough for the model to answer a question like “Who are the children of Mary Lee Pfeiffer?” without any prompt engineering/tuning.

        However, if the context already contains something like “Who was Tom Cruise’s mother?”, then the LLM has no problem correctly answering “Who are the children of Mary Lee Pfeiffer?”, listing all the children, including Tom Cruise.
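        The two conditions above can be sketched roughly like this (hypothetical templates and function names, not the paper's actual code; no model is called, this just shows how the prompts differ):

        ```python
        # Forward fact used for fine-tuning, and the reverse question used for evaluation.
        FORWARD_FACT = "{child}'s mother is {parent}."
        REVERSE_QUESTION = "Who are the children of {parent}?"

        def out_of_context_eval(child: str, parent: str) -> tuple[str, str]:
            """Out-of-context condition: the model is fine-tuned on the forward
            fact, then asked the reverse question cold (the case where models
            reportedly fail)."""
            finetune_example = FORWARD_FACT.format(child=child, parent=parent)
            cold_query = REVERSE_QUESTION.format(parent=parent)
            return finetune_example, cold_query

        def in_context_eval(child: str, parent: str) -> str:
            """In-context condition: the forward fact appears in the same prompt
            as the reverse question (the case where models succeed)."""
            fact = FORWARD_FACT.format(child=child, parent=parent)
            question = REVERSE_QUESTION.format(parent=parent)
            return fact + " " + question

        train, query = out_of_context_eval("Tom Cruise", "Mary Lee Pfeiffer")
        print(train)  # Tom Cruise's mother is Mary Lee Pfeiffer.
        print(query)  # Who are the children of Mary Lee Pfeiffer?
        print(in_context_eval("Tom Cruise", "Mary Lee Pfeiffer"))
        ```

        The point being: the fine-tuning example and the cold query never co-occur, so the reverse direction has to come from the weights, while the in-context prompt hands the model both directions at once.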

        Note that it would be confusing even for a human to be asked “Who is the son of Mary Lee Pfeiffer?”, which is what they test on, since she had more than one son. That was the point of my comment: it’s just a misleading question.

        But that’s not the general issue the researchers have unearthed, as I had assumed based on the “A is B” summary, so yeah, it’s just a poor choice of wording.