AI assistants make widespread errors about the news, new research shows
Oct 21, 2025 3:26 PM

* Research finds AI assistants make errors reporting news

* Public trust could be eroded, EBU official says

By Olivia Le Poidevin

GENEVA, Oct 22 (Reuters) - Leading AI assistants misrepresent news content in nearly half their responses, according to new research published on Wednesday by the European Broadcasting Union (EBU) and the BBC.

The international research studied 3,000 responses to questions about the news from leading artificial intelligence assistants - software applications that use AI to understand natural language commands to complete tasks for a user.

It assessed AI assistants in 14 languages - including ChatGPT, Copilot, Gemini and Perplexity - for accuracy, sourcing and the ability to distinguish opinion from fact.

Overall, 45% of the AI responses studied contained at least one significant issue, with 81% having some form of problem, the research showed.

Reuters has contacted the companies for comment on the findings.

Gemini, Google's AI assistant, has stated previously on its website that it welcomes feedback so that it can continue to improve the platform and make it more helpful to users.

OpenAI and Microsoft (MSFT) have previously said hallucinations - when an AI model generates incorrect or misleading information, often due to factors such as insufficient data - are an issue that they are seeking to resolve.

Perplexity says on its website that one of its "Deep Research" modes has 93.9% accuracy in terms of factuality.

SOURCING ERRORS

A third of AI assistants' responses showed serious sourcing errors such as missing, misleading or incorrect attribution, according to the study.

Some 72% of Gemini's responses had significant sourcing issues, compared to below 25% for all other assistants, it said.

Issues of accuracy were found in 20% of responses from all AI assistants studied, including outdated information, it said.

Examples cited by the study included Gemini incorrectly stating changes to a law on disposable vapes and ChatGPT reporting Pope Francis as the current Pope several months after his death.

Twenty-two public-service media organisations from 18 countries including France, Germany, Spain, Ukraine, Britain and the United States took part in the study.

With AI assistants increasingly replacing traditional search engines for news, public trust could be undermined, the EBU said.

"When people don't know what to trust, they end up trusting

nothing at all, and that can deter democratic participation,"

EBU Media Director Jean Philip De Tender said in a statement.

Some 7% of all online news consumers and 15% of those aged

under 25 use AI assistants to get their news, according to the

Reuters Institute's Digital News Report 2025.

The new report called for AI companies to be held accountable and to improve how their AI assistants respond to news-related queries.
