AI Isn’t ‘Thinking,’ Iowa State Study of 20B Words Shows

Every time a headline says ChatGPT “thinks” or AI “knows” the answer, it may be quietly selling a myth. A new Iowa State University study that scanned more than 20 billion words of global news finds AI anthropomorphism in journalism is rarer than expected, yet the stakes for public trust still run high. The findings could reshape how newsrooms and tech firms frame the tools millions now use every day.

At a Glance:

  • Iowa State team scanned more than 20 billion words of news text.
  • “Needs” paired with AI 661 times; “knows” with ChatGPT just 32.
  • Study published Nov. 29, 2025, in Technical Communication Quarterly.
  • AP Stylebook bans language that gives AI human thoughts or feelings.

Researchers Warn Against Giving AI a Human Pulse

A team of English and linguistics scholars is pushing back on the everyday habit of describing artificial intelligence with verbs built for human minds. Their paper, published Nov. 29, 2025, in Technical Communication Quarterly, tracks how reporters and columnists pair “mental verbs” with the terms “AI” and “ChatGPT” across global news.

Words like “think,” “know,” “understand” and “remember” belong to how people describe human thought. Sliding them onto software can make a pattern-matching system sound like it has a pulse it does not possess. The gap between how a model actually works and how it is written about is exactly what the new research sets out to measure.

Lead author Jo Mackiewicz, a professor of English at Iowa State University, said the habit slips in without warning, pulled along by the same reflex people use when talking to pets or cars. “We use mental verbs all the time in our daily lives, so it makes sense that we might also use them when we talk about machines; it helps us relate to them,” Mackiewicz said. She added that the same shortcut risks “blurring the line between what humans and AI can do” for millions of readers who never touch the underlying code.

Inside a 20-Billion-Word News Check

The research team built its evidence on one of the largest datasets in linguistics. Known as NOW, the News on the Web corpus collects English-language articles from 20 countries and refreshes through the current week, creating a moving picture of real news writing.

More than 20 billion words now sit inside that archive, spanning articles published from 2010 onward. Reporters, columnists and wire desks all feed the stream, giving the scholars a live snapshot of how journalists actually describe AI instead of how experts wish they would.

Mackiewicz’s group searched that pile for verbs like “thinks,” “knows,” “learns” and “needs” paired with AI terms. The matches proved uncommon in daily news copy: verbs that imply real thought turned up far less often than the researchers had expected going in.
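
To make the method concrete, the kind of verb-pairing count the article describes can be sketched in a few lines of Python. This is an illustrative toy, not the study’s actual pipeline: the sample sentences and verb list here are hypothetical stand-ins, and the real analysis queried the 20-billion-word NOW corpus through its own interface rather than raw strings.

```python
import re
from collections import Counter

# Hypothetical miniature sample standing in for news text; the real study
# searched the NOW corpus, not a handful of hand-written sentences.
articles = [
    "AI needs large amounts of data to improve its predictions.",
    "ChatGPT knows the answer, the columnist wrote.",
    "The model needs to be trained before deployment.",
    "AI needs to understand the real world, critics argue.",
]

AI_TERMS = ("AI", "ChatGPT")
MENTAL_VERBS = ("thinks", "knows", "understands", "remembers", "learns", "needs")

# Count each (term, verb) pairing where the verb directly follows the AI term.
pattern = re.compile(
    r"\b(%s)\s+(%s)\b" % ("|".join(AI_TERMS), "|".join(MENTAL_VERBS))
)
counts = Counter(
    match.groups() for text in articles for match in pattern.finditer(text)
)

for (term, verb), n in counts.most_common():
    print(f"{term} + {verb}: {n}")
```

Even this toy shows why context matters: the regex counts “AI needs” whether the sentence means routine data input or implied understanding, which is exactly the distinction the researchers had to judge by hand.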

When the team narrowed the list, one pairing jumped out. “Needs” appeared next to AI 661 times in the corpus, while “knows” was the most frequent verb paired with ChatGPT at just 32 hits across billions of words of coverage.

Why Anthropomorphic AI Language Carries Real Risks

The stakes reach well beyond grammar rules. When a machine gets described like a person, users often start trusting it like one, and that trust can steer real money, health and safety choices.

Researchers outside Iowa State have sounded a similar alarm for years. A widely cited arXiv paper on the opportunities and risks of AI anthropomorphization tied the practice to user safety concerns, deception and misplaced reliance on chatbots.

Mackiewicz said the habit can also shift blame. Calling a chatbot “smart” can make it seem to act on its own, even though humans build, train, deploy and monetize it. That framing can quietly move responsibility away from the developers, companies and regulators steering the technology.

The Iowa State authors flagged several practical worries for everyday coverage:

  • Readers may assume AI can reason, when it only predicts likely words.
  • Clients may trust AI outputs for tasks it cannot reliably handle alone.
  • Audiences may forget the human teams who decide what a model can say.
  • Policy makers may regulate a “mind” rather than the companies behind it.

Co-author Jeanine Aune said the wording can linger long after people close the browser. “Certain anthropomorphic phrases may even stick in readers’ minds and can potentially shape public perception of AI in unhelpful ways,” Aune said.

How News Standards Shape AI Coverage

Newsroom rules may be part of the reason mental verbs stay rare in AI stories. Style guides have quietly tightened for almost three years as chatbots flooded mainstream reporting beats.

The Associated Press added a dedicated AI chapter to its Stylebook in August 2023. Nieman Lab’s breakdown of the update cites a key warning: journalists should avoid “language that attributes human characteristics to these systems, since they do not have thoughts or feelings but can respond in ways that give the impression that they do.”

“For writers, this nuance matters: the language we choose shapes how readers understand AI systems, their capabilities and the humans responsible for them,” Mackiewicz said.

Editors outside the Associated Press have echoed the same point in trainings and internal memos. Style guides at several large news groups now flag gendered pronouns, human names and emotion verbs for machines. The Iowa State team said those tightening industry standards may help explain the low counts inside their scan.

What the Spectrum of AI Language Means for Readers

Not every mental verb is a red flag, the study noted. Context, not just the vocabulary on the page, often decides the damage.

For instance, “needs” often described simple inputs, as in “AI needs large amounts of data.” The authors said that reads the same way as saying a car “needs” gasoline or a recipe “needs” flour, with no human inner life implied.

Other phrases push much further. Statements like “AI needs to understand the real world” can hint at human reasoning, ethics or even awareness. Aune said that is where the spectrum gets slippery and reader perception starts to bend toward science fiction.

The authors sorted sample phrases along that spectrum:

  • “AI needs large data” (low): basic input, like fuel for a machine.
  • “AI needs to be trained” (moderate): action by humans, in passive voice.
  • “AI needs to understand the world” (high): implies human-style awareness.

The authors said the takeaway is simple but overlooked. Word choice is not a cosmetic fix; it changes what the public believes AI can safely do. As Aune put it, anthropomorphizing “isn’t all-or-nothing and instead exists on a spectrum.”

Frequently Asked Questions

What did the Iowa State University study on AI language find?

The team found that news writers rarely pair “AI” or “ChatGPT” with mental verbs, and when they do, the usage is not always truly anthropomorphic.

Why is anthropomorphizing AI a problem?

It can make systems seem more independent or conscious than they are, which can fuel overtrust and shift blame away from the humans who build them.

What is the NOW corpus used in the study?

NOW stands for News on the Web. It collects more than 20 billion words of English-language news from 20 countries, updated through the current week.

What does the AP Stylebook say about AI language?

AP warns journalists to avoid language that gives AI systems human characteristics, since the tools do not have thoughts or feelings.

What are the most common mental verbs paired with AI in the news?

“Needs” was the top pairing with AI at 661 hits, while “knows” was the top verb paired with ChatGPT at just 32 hits.

Language shapes public trust as much as data and policy do. The Iowa State audit of more than 20 billion words suggests most reporters already dodge the worst AI anthropomorphism traps, yet the edges of the spectrum still matter for public understanding. Every verb in an AI story is a tiny vote on what these tools really are. Share your take in the comments.