This is a classic sequence post: (mis)appropriated Japanese phrases and cultural concepts, references to the AI box experiment, and links to other sequence posts. It is also especially ironic given Eliezer’s recent switch to doomerism, with his new catchphrases of “shut it all down,” “AI alignment is too hard,” and “we’re all going to die.”

Indeed, with developments in NN interpretability and a use case of making LLMs not racist or otherwise horrible, it seems to me like there is finally actually tractable work to be done (that is at least vaguely related to AI alignment)… which is probably why Eliezer is declaring defeat and switching to the podcast circuit.

  • Evinceo@awful.systems · 1 year ago

    It’s amazing that one can use so many words to describe such a simple concept. Astonishing lack of economy. I am not going to read or even skim that; I just thought it was funny that he was like ‘I won, see?’ and his proof is a dead link.