• mozz@mbin.grits.dev · 1 month ago

    OpenAI, at least, is now attempting to bolt on a “memory” by having the LLM spit out short snippets of whatever it might need to know later, which it then gets access to when completing later prompts. Like everything else post-GPT-4, it seems fine but doesn’t really work all that well at what it’s intended to do.
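
    Roughly, the mechanism amounts to something like the toy sketch below. This is not OpenAI’s actual implementation; `call_llm`, the `MEMORY:` marker, and the memory instruction are all made up for illustration.

    ```python
    # Toy sketch of a bolt-on "memory" loop: the model is asked to emit
    # snippets worth keeping, and those snippets are prepended to later prompts.
    # call_llm is a hypothetical stand-in for a real model call.

    MEMORY_INSTRUCTION = (
        "If anything here seems worth remembering for later, "
        "output it on a line starting with 'MEMORY:'."
    )

    memories: list[str] = []

    def call_llm(prompt: str) -> str:
        # Stub standing in for a real model call; returns a canned reply.
        return "Sure, noted.\nMEMORY: user prefers metric units"

    def ask(user_prompt: str) -> str:
        # Prepend previously saved snippets so the model can "remember" them.
        memory_block = "\n".join(f"- {m}" for m in memories)
        full_prompt = (
            f"Known facts about the user:\n{memory_block}\n\n"
            f"{MEMORY_INSTRUCTION}\n\nUser: {user_prompt}"
        )
        reply = call_llm(full_prompt)

        # Harvest any new snippets the model chose to save.
        for line in reply.splitlines():
            if line.startswith("MEMORY:"):
                memories.append(line.removeprefix("MEMORY:").strip())

        # Hide the memory bookkeeping from the user-facing reply.
        return "\n".join(
            l for l in reply.splitlines() if not l.startswith("MEMORY:")
        )

    print(ask("I always want distances in kilometres."))
    print(memories)  # -> ['user prefers metric units']
    ```

    The catch is that the model itself decides what counts as memorable and has to phrase it usefully, which is exactly the part that tends to work only so-so in practice.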