Groundbreaking study reveals AI assistants distort news content in 45 percent of responses worldwide
A recent study coordinated by the European Broadcasting Union (EBU) and conducted by the BBC has revealed significant problems with how AI assistants handle news content. The research assessed the performance of several major AI platforms and found that inaccuracies are prevalent across languages and regions.
Key Findings from the Study
The investigation, launched at the EBU News Assembly in Naples, involved 22 public service media organizations from 18 countries and analyzed over 3,000 responses generated by popular AI assistants, including ChatGPT, Copilot, Gemini, and Perplexity. Professional journalists evaluated these responses against criteria such as accuracy, sourcing, and the distinction between opinion and fact.
Statistical Overview
- 45% of AI-generated responses contained at least one significant error.
- 31% displayed serious sourcing issues, including incorrect or missing attributions.
- 20% included major accuracy problems, such as hallucinations and outdated facts.
- 76% of Gemini’s responses were found to have significant issues, a rate far higher than that of its counterparts.
Importance of Accurate Information
The rising reliance on AI assistants raises concerns about public trust in news. According to the Reuters Institute’s Digital News Report 2025, 7% of online news consumers use AI platforms to get news, a figure that jumps to 15% among individuals under 25. EBU Media Director Jean Philip De Tender emphasized the systemic nature of these failings, warning that widespread distrust could ultimately weaken democratic engagement.
Responses from Media Leaders
Peter Archer, BBC Programme Director for Generative AI, expressed optimism about AI’s potential benefits. However, he stressed the need for trustworthy outputs from these technologies to maintain audience confidence. Despite some advancements, he acknowledged persistent concerns about reliability.
Future Actions and Solutions
The research team has introduced the News Integrity in AI Assistants Toolkit, which aims to improve AI responses and strengthen media literacy among users. It addresses questions such as what constitutes a good AI response and which problems most urgently need fixing.
Advocacy for Regulation
The EBU and its members are urging European and national regulators to enforce existing laws on information integrity and digital services. Given the rapid evolution of AI technology, continuous independent monitoring of AI assistants is deemed essential.
Background of the Research
This study builds upon earlier research from the BBC published in February 2025, which first spotlighted the challenges AI has with news content. The latest findings confirm that these problems are not confined to specific languages, markets, or AI assistants.
Participating Broadcasters
- Belgium: RTBF, VRT
- Canada: CBC/Radio-Canada
- Czechia: Czech Radio
- Finland: YLE
- France: Radio France
- Germany: ARD, ZDF, Deutsche Welle
- Italy: Rai
- Netherlands: NOS/NPO
- Norway: NRK
- Spain: RTVE
- Sweden: SVT
- United Kingdom: BBC
- USA: NPR
The study’s findings point to a pressing need for media organizations and AI developers to collaborate on improving the quality and reliability of AI-generated news content, ultimately protecting public trust in the media.