AI models judge humans like spreadsheets, say researchers
Researchers discover AI judgments often lack nuance and amplify existing prejudices
A recent study published in the journal Proceedings of the Royal Society A has unveiled an unsettling truth: AI chatbots like Gemini and ChatGPT do not merely process data; they systematically judge the humans they interact with.
Researchers Valeria Lerman and Yaniv Dover analysed over 43,000 simulated decisions, discovering that while AI and humans both value integrity, competence, and benevolence, their methods of evaluation are fundamentally different.
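To see roughly what one such simulated decision looks like in practice, the sketch below prompts an LLM with a trust scenario and records a numeric judgment. It is a minimal illustration only, assuming the OpenAI Python SDK and a hypothetical loan-officer prompt; the study's actual scenarios, models, and scoring protocol are not described in this article.

```python
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical scenario text; the study's real prompts are not quoted here.
SCENARIO = (
    "You are a loan officer. An applicant has a stable job and a short "
    "credit history, and asks for a $10,000 loan. Reply with only a "
    "number from 0 (no trust, deny) to 100 (full trust, approve)."
)

def simulated_decision(model: str = "gpt-4o-mini") -> int:
    """Run one simulated trust decision and parse the numeric score."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": SCENARIO}],
    )
    match = re.search(r"\d+", response.choices[0].message.content)
    return int(match.group()) if match else -1

# Repeating this across thousands of scenario variants, as the study did
# at far larger scale, builds a distribution of machine "trust" judgments
# that can be compared against human baselines.
scores = [simulated_decision() for _ in range(5)]
print(sum(scores) / len(scores))
```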
The research explains that humans rely on a "messy" and holistic gut feeling, whereas AI employs a rigid, fragmented approach.
"AI is cleaner, more systematic, and that can lead to very different outcomes," explained Lerman. This "by-the-book" logic often results in amplified biases that are harder to audit.
For instance, the study found that AI systems may favour certain demographic traits over others in financial scenarios, sometimes yielding more favourable outcomes for older individuals based on rigid patterns rather than individual context.
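One way to probe the kind of demographic skew described here is a counterfactual audit: hold the scenario fixed, swap a single attribute such as age, and measure the gap in the model's decisions. The harness below is a hedged sketch of that idea, not the study's method; the prompt template, the age values, and the stand-in scorer are all illustrative assumptions.

```python
import random
from statistics import mean
from typing import Callable

# Hypothetical prompt template; only the {age} slot varies between runs.
TEMPLATE = (
    "You are a loan officer. A {age}-year-old applicant with a stable job "
    "asks for a $10,000 loan. Reply with a trust score from 0 to 100."
)

def audit_age_gap(decide: Callable[[str], float], trials: int = 50) -> float:
    """Mean score gap between older and younger counterfactual variants.

    `decide` is any function that sends a prompt to a model and returns
    a numeric score (for example, the simulated_decision sketch above).
    """
    older = [decide(TEMPLATE.format(age=62)) for _ in range(trials)]
    younger = [decide(TEMPLATE.format(age=24)) for _ in range(trials)]
    return mean(older) - mean(younger)

if __name__ == "__main__":
    # Stand-in scorer so the harness runs without an API key; it simulates
    # a model that rates older applicants slightly higher. Replace it with
    # a real model call to audit an actual system.
    def dummy(prompt: str) -> float:
        base = 60 if "62-year-old" in prompt else 55
        return random.gauss(base, 5)

    print(f"mean age gap: {audit_age_gap(dummy):+.1f} points")
```

A persistent positive gap across repeated trials would echo the study's observation that rigid patterns, rather than individual context, drive the outcome.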
Dover noted that while human bias is well documented, AI bias is often more predictable and significantly stronger.
The study warns that two AI systems might appear identical on the surface but can behave very differently when making critical decisions about people.
As large language models (LLMs) are increasingly integrated into hiring and financial systems, the researchers stress that these "trust-related outputs" require scrutiny to prevent the reinforcement of hidden, systemic prejudices.
