You Can Learn a Lot About Someone’s Mind From the Way They Talk

From The Observatory

Scientists are uncovering how the hidden effort of talking affects everything from everyday conversations to spotting deception and fake news.

This adapted excerpt is from Maryellen MacDonald’s More Than Words: How Talking Sharpens the Mind and Shapes Our World (2025, Penguin Random House). It is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0) with permission from Penguin Random House. It was adapted and produced for the web by the Independent Media Institute.
Published: September 23, 2025. Last edited: September 23, 2025.
Maryellen MacDonald is a cognitive scientist who focuses on psycholinguistics.

Introduction

Sherlock Holmes, the famous fictional detective, believed that his brother, Mycroft, could also have been a brilliant solver of crimes if he had not been so impossibly lazy. Mycroft was large and slow-moving, and he avoided all unnecessary exertion. He enjoyed sitting by a window and making ingenious deductions from his close observations of the people and activities he saw outside. But rousing himself to leave home and investigate some far-flung crime scene? That was not for him.

Mycroft did venture out to visit his beloved Diogenes Club across the street from his London apartment. The Diogenes Club was in most ways a typical British gentlemen’s club, stocked with many of the usual trappings to please a Victorian gentleman—drink, food, servants, comfy chairs in which to puff a cigar and read the newspaper. It deviated from other clubs in one glaring respect. Mycroft and the other club founders established a rule prohibiting an activity that was a mainstay of other gentlemen’s clubs. The rule was so strict that members who disobeyed it were expelled. What was this activity that Mycroft Holmes so deeply wanted to avoid?

Talking. The Diogenes Club forbade talking. It’s not entirely clear why the club founders made this rule, but I think I know why. Mycroft Holmes, deep thinker, exquisite observer, highly averse to expending effort, had made a brilliant deduction: Talking is hard work.

Talking is Hard

Talking is not physically demanding in the same way as running a marathon, but our brains are working much harder when we speak than when we’re reading or understanding someone else in conversation. We don’t notice the effort of talking because what’s going on in our brains is mainly hidden from us. We’re aware of what we say, and what other people are saying, and occasionally we’re aware when something goes wrong and there’s a misunderstanding. Most of the time, though, our brains don’t reveal what they’re doing or how much they’re working.

It’s challenging to measure mental effort because it’s hidden in the brain, but researchers have figured out some tricks to get a better sense of what’s going on. There’s an ingenious technique that doesn’t require assessing the difficult-to-measure activity, like the hidden mental work of talk planning. Instead, it asks how much this mental activity interferes with simultaneously doing a second activity that’s much easier to measure.

If talking is difficult and we make some research participants do something else while talking, then people should be slow on this second task or make lots of errors doing it, because their brains are consumed by the demands of talking. And if comprehension is easier than talking, then people should be better at simultaneously doing this other activity while understanding another talker.

My favorite example of this method was developed by Amit Almor, a former postdoctoral researcher in my lab who is now a professor at the University of South Carolina.

Almor and his graduate students used the talking-plus-another-task technique to measure the difficulty of both speaking and comprehending someone else. They invited undergraduate students to bring a friend to the lab to have a conversation. In the experiment, the pair of friends sat down across from each other, each in front of a computer. The friends were told that they could talk to each other about whatever they wanted, but one of them was given a second job to do. The friend with the second job would see a moving dot on their computer screen, and they had to use the computer mouse to keep the cursor as close as possible to the dot. Keeping the cursor on the dot was tricky because the dot sometimes changed speed and direction unpredictably. The friend had to do this dot-tracking while still being involved in the conversation.

Almor’s research team used computer software to measure how many screen pixels separated the moving dot and the cursor at any time, which gave them an exact measure of when the dot-tracking was going well and when it wasn’t. They calculated the accuracy of dot-tracking at many different conversation points—when the tracker was speaking versus listening, starting to speak, finishing their turn in the conversation, listening to the start of the other person’s speech, and listening when the speaker was winding down.
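The tracking measure described above boils down to the straight-line distance, in pixels, between the dot and the cursor at each sampled moment. A minimal sketch of that computation (the coordinates here are invented for illustration, not data from Almor's study) might look like this:

```python
import math

def tracking_error(dot_positions, cursor_positions):
    """Mean Euclidean distance (in pixels) between dot and cursor samples."""
    distances = [math.dist(d, c) for d, c in zip(dot_positions, cursor_positions)]
    return sum(distances) / len(distances)

# Hypothetical (x, y) screen coordinates logged at fixed time intervals.
dot = [(100, 100), (110, 105), (125, 98)]
cursor = [(103, 104), (115, 110), (140, 90)]
print(round(tracking_error(dot, cursor), 1))  # → 9.7
```

Averaging this distance within time windows aligned to conversation events (starting to speak, finishing a turn, and so on) is what lets researchers compare tracking accuracy across those moments.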

You can probably predict the first big result: Overall, folks were much worse at dot-tracking while they were speaking than while they were listening. But the experiment also revealed interesting patterns of how difficulty waxes and wanes during a conversation. On the talking side, beginning to talk about something is tough, resulting in relatively poor performance on dot-tracking. Talking gets much easier when the speaker is close to finishing what they were saying.

Why? Because when we first start to say something, talking already requires us to do two things at once. We are doing all the muscle movements to get the first words out, and we’re simultaneously doing the mental processes of planning what’s going to come out next. Toward the end of whatever the speaker is saying, it becomes much easier because there’s nothing left to plan, and the speaker just has to say the final words.

Meanwhile, the listener has a relatively easy job early on, but when the speaker is finishing up, difficulty for the listener spikes, and their dot-tracking tanks. Again, why? It’s because the listener senses that the speaker is getting ready to finish what they wanted to say, and it’s soon going to be time to reply. Now suddenly, the listener, who’s soon to become the one talking, has several jobs to do at once. They must continue to understand what their friend is saying, start planning their reply, and also figure out exactly when it’s their turn to talk.

Reply planning begins early, even as the other person is still talking, because waiting until they finish would result in a long, awkward pause while the listener figures out what to say. Indeed, devising the plan for what is about to be said—picking the words, putting them in order, arranging their pronunciation—usually turns out to be the most challenging component of talking, in part because planning is often being done while also listening or speaking.

If talking makes it hard to follow a dot on a screen, imagine what it can do to driving a car. Many states prohibit driving while texting or talking on a cell phone. Most people assume these laws exist because drivers may take their eyes off the road and their hands off the wheel while using a cell phone. Those are very good reasons for prohibiting cell phone use while driving, but that’s only part of the story. Studies using a driving simulator rather than a dot-tracking task show similar results: Simply speaking aloud impairs driving, and vice versa.

Lying While Talking (and Detecting Lies) is Even Harder

People sometimes lie. Maybe you’ve even tried it yourself. Of course you have, because everyone does at least some minor lying. Perhaps you’ve claimed that another engagement prevents you from going to your cousin’s barbecue, or you’ve called in sick on a workday packed with boring meetings, or you’ve told your sister that you’re not upset at her, not one bit.

Part of the reason that we produce these small-scale lies is that we can get away with them. That’s another way of saying that the people we’re lying to often aren’t sure whether we’re lying or not. But language analyses might not be so easily fooled.

Left to our own intuitions, we are terrible at detecting lies. Several careful studies of lie detection have shown that we’re often no better than random guessing in distinguishing truth and lies in all kinds of situations. We struggle to distinguish between genuine and fake opinions, real and fake news, honest and untruthful reports in crime investigations, and authentic and counterfeit reviews of merchandise posted online.

Imagine for a moment that you’re on trial for a crime, that witnesses are testifying, and that twelve earnest people in the jury box are doing their job, listening carefully. Unfortunately, those jurors may be incapable of figuring out which witnesses are telling the truth.

We’re in big trouble if detectives, prosecutors, judges, and jurors can’t tell a truthful witness report from lies, but the problems don’t stop there. Fake news and other lies have led many people to try crackpot remedies for diseases or reject scientifically supported medical care, sometimes with fatal consequences. False advertising and scams routinely defraud people of their life savings. All over the world, governments use disinformation to control citizens in authoritarian regimes and to influence citizens of foreign countries, with the United States being a prime target of disinformation campaigns from overseas.

It’s no wonder that we can develop feelings of cynicism and helplessness around determining what’s true, eroding our faith in science, medicine, government, media, and each other, even in situations in which objective evidence is available.

Several education programs have been launched to train high school and college students to identify false information better. Some of these programs have failed spectacularly: In the process of exposing the students to false statements in the service of training them to distinguish truth and lies, the students don’t just fail to improve; they get worse. The exposure to fake information during training makes it seem more familiar and therefore more true. At least with the programs available so far, we’re not going to train ourselves out of this problem anytime soon. That’s why some people look to technology for help in sorting truth from lies.

The first device to offer some improvement in lie detection was the polygraph machine. You’ve likely seen these in old movies and detective shows. An interviewer asks someone a series of questions, and physiological measures—the interviewee’s heart rate, breathing, and so on—help the trained polygraph examiner to judge whether the person is lying or telling the truth.

Alas, many studies have concluded that polygraph-assisted lie detection is not highly accurate and is subject to biases by the human polygraph operator. That’s why polygraphs are associated with old movies—they’re less commonly used today. Polygraphs also have the disadvantage that the person being examined must be physically present, wearing all the measurement devices. That’s unfortunate, because there are endless situations where we want to know the truth of something said on the phone, on social media, in a trial, in a public debate, or in a podcast or TV interview.

Computer analysis of transcripts of speech or writing might fill some of the need for better lie detection. These analyses don’t require the talker to be present, and they don’t require any special equipment to be worn by the talker.

There are two general ways in which a person’s talk might reveal their truthfulness. The first idea is that truthful thoughts might slip out while someone is telling lies. This theory arose over a century ago, proposed by Sigmund Freud, the father of psychoanalysis. Freud believed that errors during talking could reveal our unconscious thoughts, as when someone accidentally says the truth, “I had to kill them,” when they had intended to lie and say, “I had to save them.” If Freud is right about unconscious thoughts leaking into our talk, we might look at errors like these for signs of lying.

Well, it’s not that simple. Applying what we know about how we plan our talk, this “kill them” error likely reflects the word kill being pulled out of long-term memory instead of save. Yes, this mistake in memory retrieval might indicate the talker’s failed intent to suppress the fact that they have killed someone. But other possibilities are also likely. Kill might have come up from memory because the person contemplated saying the truthful statement, “I had to save them from being killed.” Or the error might arise for some other reason; talkers make word substitution errors like these all the time, without any hidden thoughts.

If I accidentally say, “Turn right,” instead of “Turn left,” or “I’ll chop the onions,” when I mean that I’m chopping the garlic, I’m not lying or revealing any unconscious obsessions with garlic or left turns. Errors like these happen regularly because the process of retrieving just the right words from memory is complex and imperfect. Mistakes happen, and we can’t use them as clear evidence of lying.

If we can’t rely on errors to reveal the truth in our conversations, we need to explore a second method: the fact that talking is hard work. Remember that it can be quite challenging to talk while doing something else, such as driving or tracking a moving dot on a computer screen. Talking while lying is also doing several things at once: Someone who’s telling a story truthfully is using a memory of real events to drive their words and other aspects of planning what to say, but a liar is simultaneously planning their talking while also trying to make up something they didn’t experience and hide what they want to keep hidden.

The liar’s extra difficulty can be reflected in their talk. The liar’s story, which drives the words they choose, is likely going to be less detailed than when someone speaks truthfully, because the liar doesn’t have the real memory to draw on. The liar may also be trying to suppress reference to some real events but mix other events into their story, all while generally trying to sound like they’re telling the truth. Talking while lying makes for a cocktail of difficulty, and the consequence may be that a liar’s talk is not like the truth teller’s.

Researchers have tested these ideas by randomly assigning people to either tell the truth or lie about something—such as a specific event, their opinion on a controversial issue, and so on. The researchers then analyzed patterns in speech or writing to search for features that distinguish the people who were told to lie from the ones who were told to tell the truth. In some cases, other folks were asked to read the statements and try to identify which ones were true and which were lies.

These studies have turned up several interesting results. Once again, human judges weren’t any better than random guessing at detecting true versus false statements. Computer analyses of the language of truthful and lying talkers showed better results. The language analyses correctly assigned two-thirds of the written and spoken statements to the “truth” and “lie” categories. That’s still far from perfect, but it’s much better than we can do on our own.

As usual, the computer analyses are making their guesses based on a large pile of cues that they uncover, but we can look at some of the significant patterns. Overall, truthful texts had more complex language than the lies. This difference played out in several ways, including the factual statements having more abstract thought-related words, such as the verbs think and believe. The factual statements also had more words indicating nuance in beliefs, like except and without.

False statements often used very common verbs, such as “go,” and provided little discussion of the nuances of any opinion. A liar’s fake message is sketchy compared to someone’s real report, and the phony message can’t generate rich descriptive words, leaving the liar’s story limited to generalities.
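The real analyses weigh thousands of cues, but the basic idea of counting lexical markers can be sketched in a few lines. This toy example (not the researchers' actual model; the word lists are drastically simplified) tallies two of the cue types mentioned above:

```python
# Illustrative cue lists: abstract thought-related verbs and
# nuance/qualification markers, both reported as more frequent
# in truthful statements than in lies.
COGNITIVE_VERBS = {"think", "believe"}
EXCLUSION_WORDS = {"except", "without"}

def cue_counts(text):
    """Count occurrences of each cue type in a piece of text."""
    tokens = [w.strip(".,!?;:\"'").lower() for w in text.split()]
    return {
        "cognitive": sum(t in COGNITIVE_VERBS for t in tokens),
        "exclusion": sum(t in EXCLUSION_WORDS for t in tokens),
    }

print(cue_counts("I believe it was fine, except I think the door was open."))
# → {'cognitive': 2, 'exclusion': 1}
```

A real classifier would combine hundreds of such features with learned weights; the point here is only that the raw ingredients are simple, countable properties of the words people choose.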

Lying on a Larger Scale: Fake News

Researchers are trying to take these controlled experiments, in which they know who is lying and who is not, out into the real world, to address issues such as detecting fake news. Fake news is typically written by professional fake-news writers rather than everyday people instructed to lie, as in the study we just saw. Fake news might therefore be harder to detect than lies generated on command in an experiment.

The difficulty in spotting fake news is a serious problem. Even provably false statements can persuade some people, and all of us are likely tempted to believe some fake news, given our overall low skill in detecting lies.

Computer algorithms trained to distinguish real and fake news have found that the counterfeit reports tend to be more surprising than real news. Most real news has some sameness to it—a new law was passed, the on-ramp to Highway 12 is closed for road repairs, travelers are experiencing holiday flight delays, same old stuff. Real journalists file reports on important events even if they are not particularly surprising, but fake news is designed to grab our attention. On average, fake news is more surprising and unusual than the real thing. Whereas unprofessional liars produce lies that are vague and neutral, the professional lies of fake news go for the attention-grabbing jugular.

Fake-news creators also include emotional commentary that manipulates the reader’s emotions. All of this interesting, emotionally engaging, and fake text can be churned out faster than real news can be reported, because fake news doesn’t require time-consuming editing, fact-checking, or other activities of objective journalism. It can even be generated by artificial intelligence systems that have been trained on the language patterns in fake news stories.

Fake news that includes photos tends to have better pictures than real news, as real news photographers are limited by the real people or scenes related to the story. In contrast, fake news generators can scour stock photo sites or manipulate photos to create something eye-popping.

The high levels of interest in and emotional engagement with fake news are undoubtedly part of the problem that fake news poses. A study of fake news on X (formerly Twitter) showed that it’s retweeted and reposted far more often than real news, spreading six times as fast by some measurements. And we can’t blame automated bots for this problem; the rapid spread of fake news appears to be due to humans retweeting it, not computerized bots. And consider what research studies have found about hate speech—the more often someone produces an opinion, the more strongly they believe it. We can expect the same regarding the repetition and reposting of fake news.

I’d like to be able to tell you that high-powered computer analyses are powerful allies in the fight against fake news, but the truth is much more sobering. There is intensive ongoing research on computer models for fake-news detection, and the field is rapidly evolving, but as I’m writing this, the many approaches being tested are not really doing that well.

Good detection of fake news requires not only knowledge of language patterns but also awareness of events going on in the world, which change every day. Fake news appears in various venues with distinct characteristics, including “news” articles, Facebook posts, blog posts, podcast discussions, fake videos, and more. Different approaches may be needed for various types of fake news. And alas, there is so much of it, and it spreads so quickly.

Stop to Consider the Following

Having read this far, you may be wondering whether you could use some of the information revealed by computer analyses of language to improve your own interactions with people or media. The computers are crunching enormous amounts of data and weighing thousands of possible cues. That’s more than we humans could ever detect, especially in the middle of a conversation. But could you make meaningful gains in your analysis of others by attending to just a few of the major talking patterns?

Maybe. I think your best bet is to use the information reported here to turn a skeptical eye to possible fake news—compared to real news, the fake stuff is more surprising, interesting, and emotionally engaging, with more pretty pictures and more viral spread. Fake news is hard to detect, but folks who are more analytic—pausing to think about what’s plausible versus what’s illogical—are better at spotting it than those who just go with their gut instinct. Someone who combines this analytical approach with the knowledge that fake news tends to be surprising and emotional may gain an additional edge in fake-news detection.

What we do get from this discussion of computer-aided analyses of talk is a sense of promise and pitfalls for the future. The computer-aided analyses are performing well and are expected to continue improving. They’re going to be applied more broadly. They might struggle with fake news for some time, but perhaps they’ll improve at identifying individuals who spout hate speech and are most likely to act on it.

If talk patterns could help law enforcement predict probable future hate crimes, those predictions could be the basis for important crime prevention attempts, including using the predictions to support legal injunctions against the hate talker, barring them from contact with the group they’re targeting, or using red flag laws that revoke the hate talker’s right to own guns.

Despite these potential benefits coming down the road, you do want to think about your views on having your own talk analyzed. How do you feel about the lack of privacy you’re inviting when you sign up for anything that might be tracking and storing your talk?

A majority of Americans believe that social media companies are not sufficiently protecting their privacy. Most likely, they are concerned about data breaches where private information is collected by unauthorized groups, such as hackers accessing bank account details or passwords. That’s definitely a concern. But we should also recognize that our talk is being analyzed to reveal information about us, and our acceptance of user agreements has likely given companies free rein to do so.

Given the almost daily advances in artificial intelligence and the current rush of products to market, we really don’t know what the upper limit will be on the success of talk analyses. We also don’t know the upper limit on where they’ll be applied or how much they will impact our lives. In the wrong hands, these algorithms could be used to attack free speech or discriminate against those who are legitimately questioning authority.

Collectively and individually, we should keep informed about both the promise of this technology and the safeguards that are needed to prevent abuse.