Study finds news writers avoid humanising language when describing artificial intelligence
Research team highlights the importance of choosing words that accurately reflect AI as tools
Professor Jo Mackiewicz and research associate Jeanine Aune from Iowa State University have published a study examining the language used to describe artificial intelligence.
The report, titled "Anthropomorphising Artificial Intelligence: A Corpus Study of Mental Verbs Used with AI and ChatGPT," was published in the journal Technical Communication Quarterly.
It investigates "anthropomorphism," the practice of assigning human traits to non-human systems through "mental verbs" such as "think," "know," or "remember."
Analysing the 20-billion-word News on the Web corpus, the researchers found that news writers are more cautious than previously assumed.
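The kind of collocation analysis the study describes can be illustrated with a minimal sketch: counting how often an AI term co-occurs with a mental verb in a sentence. The mini-corpus, the verb list, and the function name below are all illustrative assumptions, not the researchers' actual method or data; the real study analysed the full News on the Web corpus.

```python
import re
from collections import Counter

# Hypothetical mini-corpus standing in for News on the Web excerpts.
SENTENCES = [
    "ChatGPT thinks the answer is correct.",
    "The AI model generates a summary of the text.",
    "AI knows more than any single expert, some users claim.",
    "The chatbot produced an error message.",
    "ChatGPT remembers earlier parts of the conversation.",
]

# Example mental verbs, echoing those named in the study's title.
MENTAL_VERBS = {"think", "thinks", "know", "knows", "remember", "remembers"}
AI_TERMS = {"ai", "chatgpt"}

def count_mental_collocations(sentences):
    """Count, per mental verb, the sentences where it co-occurs with an AI term."""
    counts = Counter()
    for sentence in sentences:
        tokens = set(re.findall(r"[a-z]+", sentence.lower()))
        if tokens & AI_TERMS:
            for verb in tokens & MENTAL_VERBS:
                counts[verb] += 1
    return counts

print(count_mental_collocations(SENTENCES))
# → Counter({'thinks': 1, 'knows': 1, 'remembers': 1})
```

A real corpus study would work with part-of-speech tags and lemmas rather than raw token matching, but the underlying question is the same: how often do writers pair AI terms with verbs of human cognition?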
Although everyday speech frequently treats machines as sentient beings, professional journalists rarely pair AI terms with human-like descriptors.
Professor Mackiewicz noted that while using such language helps people relate to technology, it risks "blurring the line" between human capability and machine processing.
The study suggests that when writers do use anthropomorphic phrasing, it exists on a spectrum, ranging from technically necessary shorthand to deeper hints of human consciousness.
Aune warned that certain phrases might "stick in readers' minds" and shape public perception in unhelpful ways.
The findings encourage professionals to remain mindful of their vocabulary to ensure readers understand that AI systems are tools rather than thinking entities.
Ultimately, the research highlights that the language chosen by communicators shapes the broader understanding of AI and the humans responsible for its development.
Moving forward, Mackiewicz and Aune emphasise that attention to these linguistic nuances will remain essential as AI technology continues to evolve.
