LLMs performed best on questions related to legal systems and social complexity, but they struggled significantly with topics such as discrimination and social mobility.
“The main takeaway from this study is that LLMs, while impressive, still lack the depth of understanding required for advanced history,” said del Rio-Chanona. “They’re great for basic facts, but when it comes to more nuanced, PhD-level historical inquiry, they’re not yet up to the task.”
Among the tested models, GPT-4 Turbo ranked highest with 46% accuracy, while Llama-3.1-8B scored the lowest at 33.6%.
That is accurate, but the people who design and distribute LLMs refer to the process as "machine learning" and use terms like "hallucinations," which is the primary cause of the confusion.
I think the problem is the use of the term AI. Regular Joe Schmo hears/sees "AI" and thinks of Data from ST:TNG or the Cylons from Battlestar Galactica, not glorified search-engine chatbots. But "AI" sounds cooler than "LLM," so they use "AI."
The term is fine. Your examples are very selective. I doubt Joe Schmo thought the aimbots in CoD were truly intelligent when he referred to them as AI.