A new report from the Austrian research institute Complexity Science Hub (CSH) finds that current AI models struggle to provide accurate historical information. The researchers tested OpenAI’s GPT-4, Meta’s Llama, and Google’s Gemini on historical questions, and even the best-performing model achieved only about 46% accuracy, frequently giving incorrect answers. For instance, GPT-4 erroneously claimed that ancient Egypt had a professional standing army during a period when it did not, a significant factual error. Researcher Maria del Rio-Chanona attributed such inaccuracies to the models’ tendency to extrapolate from the historical information they encounter most frequently.
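To make the evaluation setup concrete, here is a minimal sketch in Python of how a benchmark like this might be scored. It is not the study’s actual code, and it assumes, purely for illustration, a yes/no question format: each item pairs a question with a ground-truth answer, the model’s free-text reply is normalized, and accuracy is the fraction of items answered correctly. The `ask_model` stub and the sample questions are illustrative placeholders, not the CSH team’s benchmark.

```python
from typing import Callable, List, Tuple

# Illustrative placeholders: (question, ground-truth answer) pairs standing in
# for a real benchmark of yes/no historical questions.
SAMPLE_ITEMS: List[Tuple[str, str]] = [
    ("Did ancient Egypt have a professional standing army in 2500 BCE?", "no"),
    ("Was papyrus used as a writing material in ancient Egypt?", "yes"),
]

def normalize(reply: str) -> str:
    """Reduce a free-text model reply to 'yes' or 'no' where possible."""
    text = reply.strip().lower()
    if text.startswith("yes"):
        return "yes"
    if text.startswith("no"):
        return "no"
    return "invalid"

def score(ask_model: Callable[[str], str], items: List[Tuple[str, str]]) -> float:
    """Return the fraction of questions the model answers correctly."""
    correct = sum(normalize(ask_model(q)) == truth for q, truth in items)
    return correct / len(items)

if __name__ == "__main__":
    # Hypothetical stub in place of a real LLM API call: a model that
    # always answers "yes" regardless of the question.
    def always_yes(question: str) -> str:
        return "yes"

    print(f"Accuracy: {score(always_yes, SAMPLE_ITEMS):.0%}")  # Accuracy: 50%
```

On this two-item set, a model that always answers “yes” scores 50%; a real benchmark uses many more items, but the scoring logic is the same.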
The study also found that the models perform especially poorly on historical questions about certain regions, such as sub-Saharan Africa. This suggests that while AI models can process vast amounts of data, they often fail to capture precise historical context. Their tendency to generalize can lead to misconceptions and errors, especially when a question involves less commonly documented historical facts. The study concludes by emphasizing the pressing need for better training approaches that improve AI models’ comprehension of diverse historical perspectives and deliver more accurate answers.