Mental Verbs in Media Make AI Seem Human, Study Finds

Research reveals anthropomorphic language inflates AI's perceived abilities and shifts responsibility away from developers

“That’s what ChatGPT knows.” “Claude thinks this way.”

A study has found that such casual expressions used in daily life lead people to perceive artificial intelligence (AI) as more human-like than it actually is. The moment mental verbs like “thinks,” “knows,” “understands,” and “remembers” are attached to AI, people begin to mistakenly attribute beliefs and intentions to machines.

The research team from Iowa State University in the U.S. published these findings in the international academic journal *Technical Communication Quarterly*.

According to the team, such expressions create misunderstandings in two ways. First, they inflate AI’s capabilities beyond reality. Phrases like “AI decided” or “ChatGPT knows” make the system appear autonomous and intelligent, excessively raising trust and expectations. However, most AI services based on large language models (LLMs) are merely tools that analyze data patterns to produce results—they do not think or judge independently.

The second issue is that they obscure accountability. Describing AI as if it has intentions hides the roles of developers and companies that design, train, deploy, and manage the system. The team stated, “Anthropomorphic expressions can distort the responsibility structure surrounding AI by lingering in readers’ perceptions.”

How often do media outlets use such expressions? The research team analyzed the *News on the Web (NOW)* dataset, which includes over 20 billion words from English-language articles in 20 countries. The results showed that media generally uses restrained language. When AI was the subject, the most common mental verb was “needs” (661 instances), while “knows” appeared only 32 times when “ChatGPT” was the subject. The team attributed this to editorial guidelines from major media outlets, such as the Associated Press’s recommendation to avoid attributing human emotions or traits to AI.
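The article does not describe the team's actual corpus-query method, so as a purely illustrative sketch, the kind of "subject + mental verb" frequency count described above might look like this (the verb and subject lists here are assumptions drawn from the examples in the article, not the study's full query):

```python
import re
from collections import Counter

# Hypothetical illustration: count occurrences of "<AI subject> <mental verb>"
# patterns in raw text, as one might when profiling media language about AI.
# The NOW corpus has its own query interface; this is not the study's method.
MENTAL_VERBS = ["thinks", "knows", "understands", "remembers", "needs", "decides"]
SUBJECTS = ["AI", "ChatGPT", "Claude"]

def count_mental_verb_uses(text):
    """Return a Counter of '<subject> <verb>' phrases found in the text."""
    pattern = re.compile(
        r"\b(%s)\s+(%s)\b" % ("|".join(SUBJECTS), "|".join(MENTAL_VERBS))
    )
    return Counter(f"{subj} {verb}" for subj, verb in pattern.findall(text))

sample = ("AI needs vast amounts of data. ChatGPT knows the answer, "
          "and some headlines say AI decides on its own.")
print(count_mental_verb_uses(sample))  # each of the three phrases appears once
```

A count like this only surfaces how often the constructions occur; as the next paragraph notes, interpreting them still requires reading each use in context.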

The same expression can carry different meanings depending on context. The phrase “AI needs vast amounts of data” is not significantly different from describing the conditions for a car or a recipe. In contrast, “AI needs to understand reality” invites humans to project reasoning, ethics, or consciousness onto machines. The team noted, “The language media chooses quietly shapes readers’ understanding of AI, expectations for technology, and their grasp of the humans behind it.”

Originally written by: Choi Won-woo

Source: The Chosun Daily

Published on: 20 April 2026

Link to original article: Mental Verbs in Media Make AI Seem Human, Study Finds
