Gossip Herald


Why do AIs hallucinate? Shocking new study reveals the truth

A joint study by OpenAI and the Georgia Institute of Technology has revealed why AI chatbots often 'hallucinate'

By GH Web Desk

A new study from OpenAI and the Georgia Institute of Technology has revealed why artificial intelligence (AI) chatbots often “hallucinate,” confidently presenting false information as fact.

The researchers argue that these errors stem not from bad data but from how AI models are trained and evaluated, with confidence rewarded over accuracy.

Large language models (LLMs) such as ChatGPT are designed to predict the next word in a sequence; however, during training and evaluation, they are graded against benchmarks that favour bold answers over honest expressions of uncertainty.

“They’re rewarded for guessing, not for saying ‘I don’t know,’” the researchers noted.
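To see the incentive at work, consider how a typical benchmark scores answers. The short sketch below assumes a simple binary grading scheme (1 point for a correct answer, 0 for a wrong answer or an abstention); the confidence figures are illustrative, not taken from the study.

```python
# A minimal sketch of the incentive described above, under an assumed
# binary benchmark: 1 point for a correct answer, 0 for a wrong answer,
# and 0 for abstaining ("I don't know"). Confidences are illustrative.

def expected_score_guess(confidence: float) -> float:
    """Expected score if the model guesses: confidence * 1 + (1 - confidence) * 0."""
    return confidence * 1.0 + (1.0 - confidence) * 0.0

def expected_score_abstain() -> float:
    """Expected score if the model says 'I don't know': always 0."""
    return 0.0

for confidence in (0.9, 0.5, 0.1):
    guess = expected_score_guess(confidence)
    abstain = expected_score_abstain()
    print(f"confidence={confidence:.1f}: guess={guess:.2f}, abstain={abstain:.2f}")
    # Even at 10% confidence, guessing scores higher than abstaining,
    # so a benchmark-optimising model never says "I don't know".
```

Under this kind of scoring, guessing weakly dominates abstaining at any confidence level, which is exactly the incentive the researchers describe.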

Even with perfect training data, some questions are inherently unanswerable, meaning a baseline level of error is mathematically inevitable.

“The reality is we won’t ever reach 100% accuracy. But that doesn’t mean language models must hallucinate,” noted Adam Kalai, a research scientist at OpenAI.

The researchers call for a redesign of industry benchmarks so that incorrect guesses are penalised and honest admissions of uncertainty are rewarded.

According to them, this change could teach AI systems humility and reduce their tendency to bluff.
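The sketch below illustrates one way such a redesigned benchmark could work: wrong answers carry a penalty tied to a confidence threshold, so abstaining becomes the better bet on low-confidence questions. The threshold value and penalty formula here are illustrative assumptions, not the study's exact proposal.

```python
# A hedged sketch of one possible redesigned scoring rule: wrong answers
# are penalised, so guessing only pays off above a confidence threshold t.
# The penalty t / (1 - t) and the confidences below are illustrative
# assumptions, not the study's exact numbers.

def expected_score(confidence: float, t: float, answer: bool) -> float:
    """Expected score for answering (True) or abstaining (False).

    Correct answer: +1.  Wrong answer: -t / (1 - t).  Abstain: 0.
    """
    if not answer:
        return 0.0
    penalty = t / (1.0 - t)
    return confidence * 1.0 - (1.0 - confidence) * penalty

t = 0.75  # hypothetical threshold: only answer when more than 75% confident
for confidence in (0.9, 0.75, 0.5):
    guess = expected_score(confidence, t, answer=True)
    print(f"confidence={confidence:.2f}: guess={guess:+.2f}, abstain=+0.00")
    # With this penalty, guessing below the threshold scores worse than
    # abstaining, so "I don't know" becomes the rational choice.
```

With the penalty in place, a model guessing at 50% confidence loses a full point on average, while abstaining costs nothing, which is the kind of incentive shift the researchers argue could curb bluffing.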

However, experts remain sceptical. Some have warned that encouraging models to say “I don’t know” could make them too cautious, alienating users who expect confident answers.

“If LLMs keep pleading the Fifth, they can’t be wrong, but they’ll also be useless,” Subbarao Kambhampati of Arizona State University stated.

The development comes as OpenAI transitions into a public benefit corporation (PBC) worth over $500 billion. But the challenge of balancing truthfulness with user engagement remains one of AI’s biggest unsolved dilemmas.