Study reveals leading AI assistants distort news content
Recent research from the European Broadcasting Union (EBU) and the BBC finds that leading AI assistants misrepresent news in almost half of their responses. The study, released on Wednesday, analyzed 3,000 responses about news content from several AI platforms.
Findings of the Study
The investigation scrutinized four AI tools: ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity. It assessed their accuracy, credibility of sourcing, and ability to distinguish opinion from fact across 14 languages. Key findings include:
- 45% of AI responses had at least one significant error.
- 81% of the analyzed responses contained issues of some kind.
- About 72% of responses from Gemini faced notable sourcing issues.
- 20% of all AI assistant responses included outdated information.
Usage of AI Assistants for News Consumption
According to the Reuters Institute’s Digital News Report 2025, 7% of online news consumers use AI assistants to get their news, rising to 15% among those under 25. The study’s findings highlight the risks of growing reliance on these tools as news sources.
Specific Issues Identified
Examples of inaccuracies include:
- Gemini misreported changes to the law on disposable vapes.
- ChatGPT continued to identify Pope Francis as the reigning pope months after his death.
Responses from AI Companies
The companies involved have acknowledged the need to improve their systems. Google says Gemini welcomes user feedback to guide enhancements. OpenAI and Microsoft have said that ‘hallucinations,’ where a model generates false or misleading information, are a challenge they aim to address. Perplexity claims that one of its modes achieves 93.9% factual accuracy, though the study still found issues in its responses.
Legal Actions and Accountability
In related news, several Canadian news organizations are suing OpenAI for alleged copyright violations, contending that their content was used without permission to train AI systems such as ChatGPT.
The Path Forward
The EBU emphasizes that as AI tools become more prevalent in news dissemination, maintaining public trust is crucial. The report calls on AI developers to accept the same accountability expected of traditional news organizations, including transparent processes for identifying and correcting errors.
As stakeholders work to improve AI assistant performance, the integrity of AI-delivered news remains a pressing concern. The demand for accuracy and reliable sourcing will ultimately shape the future of news consumption through these technologies.