Gossip Herald

ChatGPT, Gemini, and Copilot spreading misinformation in majority of news responses

Google Gemini performed worst, with 72% of responses showing major sourcing problems

By GH Web Desk

A new study has revealed that artificial intelligence (AI)-powered news assistants provide misleading or inaccurate information in nearly half of their responses.

According to research conducted by the European Broadcasting Union (EBU) and the BBC, nearly 45% of answers generated by leading AI assistants contained major factual errors.

The study analysed 3,000 news-related responses from AI chatbots, including OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity, testing them for factual accuracy, credibility of sources, and the ability to distinguish fact from opinion.

Covering 14 languages, the report found widespread inconsistencies across platforms, raising concerns about the growing dependence on AI tools for news consumption.

The study showed that 81% of the responses had at least one issue, while a third contained serious sourcing errors, including missing, misleading, or incorrect attributions.

Notably, Google’s Gemini performed the worst, with 72% of its responses showing major sourcing problems, far higher than any other AI assistant tested.

Researchers have warned that the rapid rise of generative AI could blur the line between verified journalism and fabricated content.

In response, companies behind these tools have acknowledged the challenge.

OpenAI and Microsoft have previously admitted that “hallucinations” remain a persistent issue.

Perplexity, meanwhile, claims its “Deep Research” mode achieves 93.9% factual accuracy, and Google has said it welcomes user feedback to improve Gemini’s reliability.