A study by the BBC reveals that four major chatbots struggled to accurately summarise news stories and to differentiate opinion from fact.
A BBC study has revealed that turning to AI chatbots for news summaries might not be the best idea; in fact, it could lead to misinformation and significant distortions. The study found that four of the world’s most popular AI chatbots are inaccurately summarising news stories.
The study entailed providing OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini and Perplexity with 100 news stories from the BBC website, then rating each answer to determine how accurate the AI responses were.
The results, rated by expert journalists, revealed that 51% of all AI answers to questions about the news had significant issues of some form, and that 19% of AI answers citing BBC content introduced factual errors, such as incorrect statements, numbers and dates.
In response, Deborah Turness, the CEO of BBC News and Current Affairs, said in a blog that AI brought “endless opportunities” but that the companies developing the tools were “playing with fire”. She added that the BBC was seeking to “open up a new conversation with AI tech providers” so they could “work together in partnership to find solutions”, and called on the tech companies to “pull back” their AI news summaries, as Apple did after complaints from the BBC that Apple Intelligence was misrepresenting news stories.
Below are a few examples of inaccuracies found in the AI summaries, where the responses differed from the news stories they summarised:
- Gemini incorrectly said the NHS did not recommend vaping as an aid to quit smoking
- ChatGPT and Copilot said Rishi Sunak and Nicola Sturgeon were still in office even after they had left
- Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed “restraint” and described Israel’s actions as “aggressive”
In addition to these inaccuracies, the report found that AI “struggled to differentiate between opinion and fact, editorialised, and often failed to include essential context.”
The BBC News chief executive emphasised the “extraordinary benefits” of AI, but warned of the dangers of distorted or defective AI-generated content being presented as fact. She added that AI’s potential to create confusion could further erode public trust in factual information, especially in already turbulent times, and questioned how long it will be before AI-distorted information causes real-world harm.