When you train an LLM on a strictly curated and verified dataset that is limited in scope, it does a very good job providing information about that specific topic and will hopefully give you the “As an LLM, I don’t know about that…” spiel for anything else.
When you let an LLM “do its own research” (i.e., train it on internet content), it starts telling you to put glue on pizza, eat a healthy number of rocks every day, and that you can run in the air as long as you don’t look down.
Maybe they really are already as smart as people. /s