
Think AI "knows" what it’s doing? Scientists say think again

Date: April 19, 2026
Source: Iowa State University
Summary: Calling AI things like "smart" or saying it "knows" something might sound harmless, but it can quietly mislead people about what AI actually does. A new study shows that news writers are more careful than expected, rarely using strongly human-like language. When they do, it often falls on a spectrum: sometimes describing simple requirements, other times hinting at human traits.
[Image: The way we talk about AI might be quietly convincing us it's more human than it is. Credit: Shutterstock]

Think, know, understand, remember.

These are everyday words people use to describe what goes on in the human mind. But when those same terms are applied to artificial intelligence, they can unintentionally make machines seem more human than they really are.

"We use mental verbs all the time in our daily lives, so it makes sense that we might also use them when we talk about machines -- it helps us relate to them," said Jo Mackiewicz, professor of English at Iowa State. "But at the same time, when we apply mental verbs to machines, there's also a risk of blurring the line between what humans and AI can do."

Mackiewicz and Jeanine Aune, a teaching professor of English and director of the advanced communication program at Iowa State, are part of a research team that studied how writers describe AI using human-like language. This type of wording, known as anthropomorphism, assigns human traits to non-human systems. Their study, "Anthropomorphizing Artificial Intelligence: A Corpus Study of Mental Verbs Used with AI and ChatGPT," was published in Technical Communication Quarterly.

The research team also included Matthew J. Baker, associate professor of linguistics at Brigham Young University, and Jordan Smith, assistant professor of English at the University of Northern Colorado. Both previously studied at Iowa State University.

Why Human-Like Language About AI Can Be Misleading

According to the researchers, using mental verbs to describe AI can create a false impression. Words such as "think," "know," "understand," and "want" suggest that a system has thoughts, intentions, or awareness. In reality, AI does not possess beliefs or feelings. It produces responses by analyzing patterns in data, not by forming ideas or making conscious decisions.

Mackiewicz and Aune also pointed out that this kind of language can overstate what AI is capable of. Phrases like "AI decided" or "ChatGPT knows" can make systems seem more independent or intelligent than they actually are. This can lead to unrealistic expectations about how reliable or capable AI is.

There is also a broader concern. When AI is described as if it has intentions, it can distract from the humans behind it. Developers, engineers, and organizations are responsible for how these systems are built and used.

"Certain anthropomorphic phrases may even stick in readers' minds and can potentially shape public perception of AI in unhelpful ways," Aune said.

How News Writers Actually Use AI Language

To better understand how often this kind of language appears, the researchers analyzed the News on the Web (NOW) corpus. This massive dataset contains more than 20 billion words from English-language news articles published in 20 countries.

They focused on how frequently mental verbs such as "learns," "means," and "knows" were used alongside terms like AI and ChatGPT.
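The paper's actual analysis was run on the NOW corpus; the core idea, counting how often an AI term is immediately followed by a mental verb, can be sketched in a few lines. The `count_collocations` helper below is hypothetical, shown on toy sentences rather than the real corpus, and ignores the part-of-speech tagging and concordance tools a real corpus study would use.

```python
import re
from collections import Counter

# Small illustrative sets; the study's actual verb list is longer.
MENTAL_VERBS = {"knows", "learns", "means", "needs", "thinks", "understands", "wants"}
AI_TERMS = {"ai", "chatgpt"}

def count_collocations(sentences):
    """Count mental verbs that immediately follow an AI term,
    e.g. 'AI needs ...' or 'ChatGPT knows ...'."""
    counts = Counter()
    for sentence in sentences:
        tokens = re.findall(r"[A-Za-z']+", sentence.lower())
        for subj, verb in zip(tokens, tokens[1:]):
            if subj in AI_TERMS and verb in MENTAL_VERBS:
                counts[(subj, verb)] += 1
    return counts

sample = [
    "AI needs large amounts of data to work well.",
    "ChatGPT knows a surprising amount about grammar.",
    "The AI needs some human assistance.",
]
print(count_collocations(sample))
# Counts two 'AI needs' pairings and one 'ChatGPT knows' pairing.
```

A bigram count like this only measures frequency; as the researchers stress below, deciding whether each hit is actually anthropomorphic still requires reading it in context.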

The findings were unexpected.

Mental Verbs Are Less Common Than Expected

The study found that news writers do not frequently pair AI-related terms with mental verbs.

"Anthropomorphism has been shown to be common in everyday speech, but we found there's far less usage in news writing," Mackiewicz said.

Among the examples identified, the word "needs" appeared most often with AI, showing up 661 times. For ChatGPT, "knows" was the most frequent pairing, but it appeared only 32 times.

The researchers noted that editorial standards may play a role. Associated Press guidelines, which discourage attributing human emotions or traits to AI, could be influencing how journalists write about these technologies.

Context Matters More Than the Words Themselves

Even when mental verbs were used, they were not always anthropomorphic.

For instance, the word "needs" often described basic requirements rather than human-like qualities. Phrases such as "AI needs large amounts of data" or "AI needs some human assistance" are similar to how people describe non-human systems like cars or recipes. In these cases, the language does not imply that AI has thoughts or desires.

In other cases, "needs" was used to express what should be done, such as "AI needs to be trained" or "AI needs to be implemented." Aune explained that these examples were often written in passive voice, which shifts responsibility back to human actors rather than the technology itself.
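The grammatical distinction Aune describes can be approximated mechanically. The sketch below uses a crude regex heuristic (my assumption, not the study's method) to flag passive "needs to be ...-ed" constructions, which shift responsibility to human actors, versus plain requirement statements; it only catches regular past participles ending in "-ed" and is illustrative only.

```python
import re

# Passive pattern: "needs to be" followed by a regular past participle,
# e.g. "AI needs to be trained" / "AI needs to be implemented".
PASSIVE_NEEDS = re.compile(r"\bneeds to be \w+ed\b", re.IGNORECASE)

def is_passive_needs(phrase: str) -> bool:
    """True if the phrase uses the passive 'needs to be ...-ed' pattern."""
    return bool(PASSIVE_NEEDS.search(phrase))

examples = [
    ("AI needs to be trained", True),         # passive: humans do the training
    ("AI needs to be implemented", True),     # passive: humans implement it
    ("AI needs large amounts of data", False) # plain requirement, like "a car needs fuel"
]
for phrase, expected in examples:
    assert is_passive_needs(phrase) == expected
```

A real linguistic analysis would use a part-of-speech tagger rather than a regex, since irregular participles ("needs to be built") would slip past this pattern.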

Anthropomorphism Exists on a Spectrum

The study also showed that not all uses of mental verbs are equal. Some phrases move closer to suggesting human-like qualities.

For example, statements like "AI needs to understand the real world" can imply expectations tied to human reasoning, ethics, or awareness. These uses go beyond simple descriptions and begin to suggest deeper capabilities.

"These instances showed that anthropomorphizing isn't all-or-nothing and instead exists on a spectrum," Aune said.

Why Language Choices About AI Matter

Overall, the researchers found that anthropomorphism in news coverage is both less frequent and more nuanced than many might assume.

"Overall, our analysis shows that anthropomorphization of AI in news writing is far less common -- and far more nuanced -- than we might think," Mackiewicz said. "Even the instances that did anthropomorphize AI varied widely in strength."

The findings highlight the importance of context. Simply counting words is not enough to understand how language shapes meaning.

"For writers, this nuance matters: the language we choose shapes how readers understand AI systems, their capabilities and the humans responsible for them," Mackiewicz said.

The research team also emphasized that these insights can help professionals think more carefully about how they describe AI in their work.

"Our findings can help technical and professional communication practitioners reflect on how they think about AI technologies as tools in their writing process and how they write about AI," the research team wrote in the published study.

As AI continues to develop, the way people talk about it will remain important. Mackiewicz and Aune said writers will need to stay mindful of how word choices influence perception.

Looking ahead, the team suggested that future studies could explore how different words shape understanding and whether even rare uses of anthropomorphic language have a strong impact on how people view AI.

Story Source:

Materials provided by Iowa State University. Note: Content may be edited for style and length.

Journal Reference:

  1. Jeanine Elise Aune, Matthew J. Baker, Jo Mackiewicz, Jordan Smith. Anthropomorphizing Artificial Intelligence: A Corpus Study of Mental Verbs Used with AI and ChatGPT. Technical Communication Quarterly, 2025. DOI: 10.1080/10572252.2025.2593840

Cite This Page:

Iowa State University. "Think AI 'knows' what it’s doing? Scientists say think again." ScienceDaily, 19 April 2026. www.sciencedaily.com/releases/2026/04/260417224505.htm
