LLMs performed best on questions related to legal systems and social complexity, but they struggled significantly with topics such as discrimination and social mobility.
“The main takeaway from this study is that LLMs, while impressive, still lack the depth of understanding required for advanced history,” said del Rio-Chanona. “They’re great for basic facts, but when it comes to more nuanced, PhD-level historical inquiry, they’re not yet up to the task.”
Among the tested models, GPT-4 Turbo ranked highest with 46% accuracy, while Llama-3.1-8B scored the lowest at 33.6%.
I’m not entirely surprised by this. LLMs are trained on the whole internet, not just the good parts. There are groups online that are very vocal about things like the Confederates having been in the right, for example. It would make sense to assume this essentially poisons the datasets on those topics; realistically, no one is contesting history from before that era.
Not that it isn’t a problem or doesn’t need fixing, just that it makes “sense”.