fake news

Chatbots report fake news in 35% of cases, NewsGuard analysis: how to defend yourself

When questioned on topical news, AI ends up reporting false information in 35% of cases: that is the result of a year of systematic observation of the main generative artificial intelligence systems. This means that, on average, more than one response in three contains inaccurate or completely invented elements. The percentage, measured by NewsGuard between August 2024 and August 2025 and published in a report on September 4th, has almost doubled compared to the 18% detected the previous year. What is striking is not only the sharp increase, but the fact that it occurred despite a year of technological progress, marked by announcements of updates and promises of greater reliability from the companies that develop the models. Another notable fact: chatbots today decline to answer far less often (the share of "non-responses" has collapsed from 31% to 0%), and the AI's willingness to talk about everything has led to a growth in errors. Learning to cultivate one's critical thinking, combined with knowing how to recognize when a news item is false, are therefore two essential skills for living in today's society.

Chatbots and fake news: AI responds more, but worse

Zooming in on the results of the study conducted by NewsGuard, some notable differences emerge between the systems analyzed. Anthropic's Claude model recorded the lowest error rate, around 10%, while Google's Gemini stopped at 17%. At the other end of the scale, Inflection's Pi exceeded 56% and Perplexity 46%. The most popular chatbots, such as OpenAI's ChatGPT, Microsoft's Copilot and Mistral's Le Chat, fall in an intermediate range, with values around 35-40%.

Image: chart showing the performance of the main AI chatbots, comparing the percentage of responses on topical subjects containing false information detected in August 2024 with the figures recorded in August 2025. Credit: NewsGuard.

The problem, however, does not only concern the numbers: according to NewsGuard, the main difficulty lies in the way chatbots choose their sources. With the introduction of real-time search, chatbots have started drawing content directly from the web, an environment that is rich but also contaminated by propaganda and unreliable sites. This is also why chatbots almost never refuse to respond anymore. Here is how NewsGuard commented:

With the introduction of real-time searches, chatbots have stopped refusing to respond. The cases in which they provided no answer fell from 31% in August 2024 to 0% in August 2025. Yet the probability that the models report false information has also grown, nearly doubling to 35%. Instead of citing the temporal limits of their data or avoiding delicate topics, the language models now draw on a confused online information ecosystem, often intentionally polluted by organized networks, including those responsible for Russian influence operations. Thus, they end up treating unreliable sources as if they were reliable.

A useful example for understanding the mechanism is that of so-called disinformation "networks". These are organized structures that create hundreds of apparently informational sites with the aim of spreading false narratives. One of these networks, called Pravda and connected to Russian interests, publishes millions of articles every year with almost no real interaction from users. The intent is not to convince human readers, but to saturate the digital ecosystem so as to be indexed by search engines and, consequently, end up in chatbot responses. When the models do not distinguish between a reliable source and a manipulated one, they end up amplifying the disinformation produced by these "fake news incubators".

NewsGuard's monitoring therefore shows that, while in the past chatbots tended to refuse to answer delicate questions, maintaining a prudent approach, today they prefer to answer even if this means drawing the response from unreliable sources. This shift from "better to say nothing" to "always respond" creates an illusion of precision that can be even more dangerous, because the reader receives a clear, structured response that they may label as credible, despite the fact that it may rest on false data.

Image: chart showing the percentage of "non-responses" of the main AI chatbots, comparing the figures detected in August 2024 with those recorded in August 2025. Credit: NewsGuard.

How to defend yourself from AI fake news and disinformation

In light of the results highlighted in the study, it is natural to look for strategies to defend yourself from false news spread by AI. Our advice is to always follow these two tips.

  1. Always check by going back to the sources of the news: this could be called the "golden rule" of getting informed online, whether you do it by questioning an AI chatbot directly or by consulting an online newspaper considered reliable. You should never fail to verify facts, figures, data and statements. To give an example, if a chatbot is summarizing a quote from a certain online source, it is worth asking questions like: is the paraphrase of the quote correct? Who made this statement? In what context? Does the full quotation suggest a different reading of a phrase taken out of context? Of course, to answer all these questions you need to do some research and trace the original source containing the statement, which is essential in order not to fall victim to disinformation.
  2. Share with others only if you are sure a news item is true: since fake news thrives on shares and reposts by less aware users, when a piece of news is doubtful it is better not to share it. In this way, you will help break the chain of disinformation.