Spotting Conspiracy Talk: A Linguistic Guide for the Digital Age


Learning to spot the language of conspiracy theories is a key skill for navigating digital media.

This article was produced for the Observatory by the Independent Media Institute.
“Spotting Conspiracy Talk: A Linguistic Guide for the Digital Age” by Danica Tomber is licensed by the Observatory under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0). For permissions requests beyond the scope of this license, please see Observatory.wiki’s Reuse and Reprint Rights guidance. Published: January 21, 2026. Last edited: January 21, 2026.
Danica Tomber is an applied linguist.

Editor’s Note: This article, by nature of the topic, may include language that is considered sensitive and/or vulgar to some readers.

While we may be most familiar with modern-day conspiracy theories about government intelligence, unidentified flying objects, vaccines, COVID-19, and more, conspiracy theories have existed for centuries.

In July of AD 64, the Great Fire of Rome destroyed roughly two-thirds of the city—which had a population of around 2 million—resulting in widespread death and homelessness. Despite several contributing factors, such as intense summer heat, dry winds, and the prevalence of wooden houses, various conspiracy theories emerged, ranging from claims that Emperor Nero deliberately started the fire to accusations that the Christian community was responsible.

When the fire began, Nero was in a different city altogether, leading to claims that he had conspired to bring about the catastrophe to rebuild Rome. Coincidentally, the Domus Aurea, an extravagant palatial project, was constructed on a portion of the ruins, which fueled conspiracy theories about his alleged involvement, despite evidence that he also supported relief efforts. In turn, Nero placed the blame on Christians, resulting in the crucifixion and burning alive of many from the religious community. There is also evidence that some Christians believed in prophecies about a forthcoming catastrophic fire in Rome at the time.

While the actual cause of the disaster remains unknown, the episode illustrates how conspiracy narratives can arise from social crises. Psychologically, fear and uncertainty can be strong motivators, and conspiracy theories empower people by filling a knowledge gap. Socially, tribalism and in-group/out-group tendencies may also contribute to the birth of such theories; a united front can foster a sense of control. These psychosocial explanations reveal that we want to feel in control, and information is power.

Conspiracy in the Digital Age

Unlike in Ancient Rome, our everyday interactions with cyberspace make it easy to access and spread information. Now, more than two decades into the rise of Web 2.0, it might be assumed that social media has led to an increase in conspiracy theories and in the number of people who believe in them, but this may not necessarily be the case. Although evidence is mixed on whether social media has caused a population-wide increase in conspiracy beliefs over time, multiple studies show that people who rely heavily on social media for news are more likely to endorse conspiracy theories than those who do not. A 2022 multi-study report in the journal PLOS One still cautions “not to dismiss the availability of conspiracy theories online, the large numbers of people who believe in some conspiracy theories, or the potential consequences of those beliefs.”

Researcher Keith Raymond Harris echoes this concern in a 2022 article in The Conversation, arguing that even with “very small numbers of true believers [in conspiracy theories], the high visibility of these false ideas can still make them dangerous.” Step Together, an Australian support service committed to ending violent extremism, cautions that the dangers of conspiracy theories include provoking social conflict, increasing prejudices, spreading fear, and eroding trust.

Given the accessibility and visibility of information that the internet affords us, we need to develop digital media literacy that empowers us with critical thinking tools to evaluate the credibility of information.

What Exactly Is a Conspiracy Theory?

Before learning how to identify a conspiracy theory, it is important to understand what it is and is not. While the term “conspiracy theory” may sometimes be used in a generic sense to describe a false narrative that promotes misinformation or disinformation, such as fake news, this is an overgeneralized and inaccurate description.

For one, conspiracies themselves can be real or imagined. The Watergate scandal is an oft-cited example of a government conspiracy that did take place.

Michael Shermer, science historian and educator, in distinguishing between conspiracy and conspiracy theory, defines a conspiracy as “two or more people plotting or acting in secret to gain an advantage or to harm others immorally or illegally” and a conspiracy theory as “a structured belief about a conspiracy, whether it’s real or not.”

A 2022 meta-analysis of conspiracy theories in online spaces characterizes these beliefs as “unique epistemological accounts that refute official accounts and instead propose alternative explanations of events or practices by referring to individuals or groups acting in secret.”

More broadly, conspiracy theories are a social phenomenon that uses narrative to explain events, often amid a societal crisis. As humans, we have a proclivity for narrative; we use stories to teach, inform, and explore—not just to entertain. Essentially, stories are a device we use to understand ourselves, our experiences, and our surroundings. A 2018 study published in the journal Topics in Cognitive Science explains that “the specific adaptive value of storytelling lies in making sense of non‐routine, uncertain, or novel situations.” The team of researchers further posits that storytelling is not merely an adaptive activity but also a collective activity, “promoting social cohesion by strengthening intra‐group identity and clarifying intergroup relations.”

Expanding on this component of intergroup relations, Jan-Willem van Prooijen, a behavioral scientist, argues in an article for the United Nations Educational, Scientific, and Cultural Organization (UNESCO) that societal crises do not solely drive the spread of conspiracy theories. He reasons that it is easier to find an explanation in the midst of a crisis by identifying a scapegoat, such as an “antagonistic outgroup.”

Accordingly, conspiracy theories can be understood as a narrative genre that supports people’s attempts to make sense of an event, especially one shrouded in uncertainty, and often operates in parallel to strengthen social bonds.

Linguistic Features of Conspiracy Theories

Many academic disciplines, including political science, psychology, and sociology, study conspiracy theories to better understand why they are created, how they spread, and how they affect society. Concomitantly, linguistic research offers insight into the language characteristics of this phenomenon, examining a broad spectrum of features, from smaller units of language, such as words, to larger units, such as narrative and discursive patterns.

One tool that linguists use to study language is a corpus. This is a database of text, spoken or written, that is representative but not all-inclusive of naturally occurring language. An example in the context of conspiracy theories is the Language of Conspiracy (LOCO) corpus, an 88-million-word corpus developed by a group of scholars to aid researchers in studying features of conspiratorial language on the internet.

As LOCO’s creators explain in a 2022 article in the journal Behavior Research Methods, while the corpus is a collection of documents from conspiracy websites, “we do not yet know what the language of conspiracy is, i.e., to what extent conspiracy language differs from non-conspiracy language.” It’s important to keep in mind that we can’t make definitive claims that a theory is conspiratorial based on the presence of certain linguistic features. Still, we can use that information, heavily context-dependent as it is, to judge whether a text shows features typically characteristic of conspiratorial language.

Recognizing Loaded Language

Perhaps one of the easiest linguistic features to spot in conspiracy theories is the presence of strong emotional or persuasive language, which researchers Emily Klein and James Hendler aptly call “loaded language.”

Studies of social media have found that conspiratorial language centers around negative emotions. Words that signal anger (e.g., “hate,” “kill,” and “annoyed”) and anxiety (e.g., “worried” and “fearful”) were shown to be significantly more prevalent in a 2024 study that analyzed a 4-million-word dataset of conspiracy discussions collected from Reddit and X, formerly known as Twitter. Similar results were found in Amos Fong et al.’s 2021 study, which analyzed data from influencers (conspiracists and scientists) and their followers on the X platform. Words representing anger (e.g., “damn” and “hell”) were used more often by conspiracy influencers and their followers. Words representing anxiety (e.g., “threat” and “horror”) were used more by conspiracy influencers, though this pattern did not extend to their followers.
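
To make this kind of frequency analysis concrete, here is a minimal Python sketch that counts how often words from small emotion categories appear in a text. The tiny word lists are placeholders built from the examples quoted above, not the dictionaries actually used in either study.

```python
# A minimal sketch of category-based word counting in the spirit of the
# frequency analyses described above. The word lists are illustrative
# placeholders, not the studies' dictionaries.
import re
from collections import Counter

LEXICON = {
    "anger": {"hate", "kill", "annoyed", "damn", "hell"},
    "anxiety": {"worried", "fearful", "threat", "horror"},
}

def emotion_rates(text):
    """Return the per-token rate of each emotion category in the text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for token in tokens:
        for category, words in LEXICON.items():
            if token in words:
                counts[category] += 1
    return {category: counts[category] / len(tokens) for category in LEXICON}

print(emotion_rates("I'm worried they will kill the story. Damn, I hate this."))
```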

Dysphemisms also reflect the intensity of loaded language. On a spectrum of offensiveness, euphemisms are considered more polite, while dysphemisms are ruder. Klein and Hendler exemplify this with the word “die.” Placing this neutral word in the middle of a spectrum, “pass away” is a common euphemism, and “drop dead” is an example of a dysphemism. Their 2022 study found that dysphemisms about death were used significantly more in anti-vaccination discussions compared to vaccination-neutral posts in online parenting platforms.

Klein and Hendler’s study identifies “thought-terminating clichés,” a type of reply that acts to stop any further discussion, as another prominent characteristic of conspiracy-focused language. These types of replies imply that the speaker wants to change the current topic of conversation or end it altogether. Some examples of thought-terminating clichés from their research are “anyway,” “agree to disagree,” “it is what it is,” and “everything happens for a reason.” In emotionally charged conspiracy theory contexts, these are used as a persuasive strategy to convince the listener that further analysis or discussion of the matter is unnecessary.
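
As a rough illustration, the sketch below flags replies that contain any of the cliché phrases quoted from Klein and Hendler’s examples. It is a toy, not their method: a real analysis would need a far larger phrase list and attention to context.

```python
# A rough sketch of flagging thought-terminating clichés in replies, using the
# example phrases quoted from Klein and Hendler's research. Simple substring
# matching is used here; a word like "anyway" is often entirely innocent.
CLICHES = [
    "agree to disagree",
    "it is what it is",
    "everything happens for a reason",
    "anyway",
]

def find_cliches(reply):
    """Return the clichés that appear in a reply (case-insensitive substring match)."""
    lowered = reply.lower()
    return [phrase for phrase in CLICHES if phrase in lowered]

print(find_cliches("Look, it is what it is. Anyway, moving on."))
# -> ['it is what it is', 'anyway']
```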

Identifying Themes of Power and Death

When we examine language at a broader level, we begin to identify discursive themes. These are overarching topics that permeate a text, and they can be interpreted through measures like word frequency and “keyness” (how unusual or unique a word is in certain types of discourse).
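
For readers curious how keyness is typically computed, the sketch below implements one standard corpus-linguistics measure, Dunning’s log-likelihood, which compares a word’s frequency in a target corpus against a reference corpus. The two toy “corpora” are invented for illustration and are unrelated to the datasets in the studies discussed here.

```python
# A sketch of one standard keyness measure, Dunning's log-likelihood (G2),
# comparing word frequencies in a target corpus against a reference corpus.
# The two toy "corpora" below are invented for illustration only.
import math
from collections import Counter

def log_likelihood(freq_target, size_target, freq_ref, size_ref):
    """G2 keyness score: higher means the word is more distinctive of the target corpus."""
    total = size_target + size_ref
    expected_t = size_target * (freq_target + freq_ref) / total
    expected_r = size_ref * (freq_target + freq_ref) / total
    g2 = 0.0
    if freq_target:
        g2 += freq_target * math.log(freq_target / expected_t)
    if freq_ref:
        g2 += freq_ref * math.log(freq_ref / expected_r)
    return 2 * g2

target = Counter("the government hid the truth about the government".split())
reference = Counter("the report describes the weather and the recent harvest".split())
size_t, size_r = sum(target.values()), sum(reference.values())

for word, freq in target.most_common():
    score = log_likelihood(freq, size_t, reference.get(word, 0), size_r)
    print(f"{word}: {score:.2f}")
```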

In the context of conspiracy language, themes of death and power come forward. Fong et al.’s 2021 study found themes of death (“dead,” “killed,” and “war”), religion (“God,” “Jesus,” and “Muslim”), and power (“government,” “military,” and “president”); both conspiracy influencers and their followers showed significantly higher use of words within these themes. As the team reasoned, these results are consistent with familiar themes of conspiracy theories: Many of these theories center around the killing of large numbers of people or the death of a well-known figure, and they place blame on powerful elites or religious groups.

Likewise, Cosgrove and Bahr’s 2024 study of conspiratorial language on the X platform and Reddit showed that conspiratorial discussions contained significantly higher rates of words that are related to death and power. However, their study found no significant differences in words associated with religion.

Of these themes, power is arguably the most important to watch for, as it is a cardinal element of conspiracy theorizing: the belief that a powerful group has caused, or is secretly behind, a harmful event.

Understanding Disorganized Narratives

Finally, in stepping out to yet another broader level of language, we can look at story- and argument-building: how themes and topics are woven together to create narratives. Logically speaking, we want narratives to be balanced and well-organized, with clear explanations and transitions so that the audience can follow along easily.

In a 2022 LOCO corpus study in Science Advances, led by Alessandro Miani, conspiracy narratives were shown to be less internally cohesive than non-conspiracy narratives. Specifically, the text from conspiracy documents was less cohesive within a document (more topics discussed and less word overlap among paragraphs), yet more cohesive with other conspiracy documents (more overlapping vocabulary patterns). That is, conspiratorial texts may appear denser because they assemble and discuss a greater number of topics, but they are less cohesive in building a logical, easy-to-follow argument.
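
To give a feel for what “word overlap among paragraphs” can mean in practice, here is a deliberately simplified Python sketch that scores a document’s cohesion as the average Jaccard word overlap between consecutive paragraphs. It is a toy stand-in, not the cohesion metrics Miani and colleagues actually used.

```python
# A simplified illustration of within-document cohesion, measured here as the
# average Jaccard word overlap between consecutive paragraphs. This toy metric
# stands in for the richer cohesion measures used in the study.
import re

def paragraph_overlap(document):
    """Mean Jaccard word overlap between consecutive paragraphs of a document."""
    paragraphs = [p for p in document.split("\n\n") if p.strip()]
    vocabularies = [set(re.findall(r"[a-z']+", p.lower())) for p in paragraphs]
    if len(vocabularies) < 2:
        return None
    scores = [len(a & b) / len(a | b) for a, b in zip(vocabularies, vocabularies[1:])]
    return sum(scores) / len(scores)

cohesive = "The fire spread through the city.\n\nThe city burned for days as the fire spread."
scattered = "The fire spread through the city.\n\nBankers secretly control the weather."
print(paragraph_overlap(cohesive))   # higher overlap -> more cohesive
print(paragraph_overlap(scattered))  # lower overlap -> less cohesive
```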

Miani continued research on conspiracy narratives in another study published in 2024. Using the same LOCO corpus, the team focused on creativity rather than cohesion, analyzing noun compounds (groups of two or more nouns). As a PsyPost article summarizing the study explains, noun compounds, such as “climate change” and “government surveillance,” “encapsulate broad concepts in a compact form.” This linguistic construction is creative and complex because it can convey a wide range of ideas in a relatively short space (a two- or three-word snippet compared to a sentence or a paragraph). While the study found no difference overall in the creativity of conspiracy texts compared to non-conspiracy texts, it did find that the noun compounds in conspiratorial narratives were more “semantically distant” but less topically diverse.
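
For illustration, noun compounds of this kind can be pulled out of text with off-the-shelf part-of-speech tagging. The sketch below assumes NLTK is installed (with its tokenizer and tagger data downloaded) and simply collects runs of two or more consecutive nouns; it is not the method used in the 2024 study.

```python
# A rough sketch of extracting noun compounds (runs of two or more consecutive
# nouns). Assumes NLTK is installed and its tokenizer and POS-tagger models
# have been fetched via nltk.download(). Illustration only.
import nltk

def noun_compounds(text):
    """Return sequences of two or more consecutive nouns, e.g. 'government surveillance'."""
    tagged = nltk.pos_tag(nltk.word_tokenize(text))
    compounds, run = [], []
    for word, tag in tagged:
        if tag.startswith("NN"):              # NN, NNS, NNP, NNPS
            run.append(word)
        else:
            if len(run) >= 2:
                compounds.append(" ".join(run))
            run = []
    if len(run) >= 2:
        compounds.append(" ".join(run))
    return compounds

print(noun_compounds("Posts about government surveillance and climate change spread quickly."))
# Expected output with a typical tagger: ['government surveillance', 'climate change']
```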

To summarize, some linguistic features of conspiracy theorizing include loaded language, themes of death and power, and disorganized narratives. But identifying conspiratorial language is no simple task. As such, research into this complex phenomenon is still unfolding.

A 2021 report by the RAND Corporation, a nonpartisan public policy research organization, highlights the complexity of conspiratorial language and offers one recommendation for future study: we need to look beyond semantics (what words mean) to the larger picture of rhetoric (overall persuasiveness). In addressing some of the current challenges in machine-learning identification of conspiracy theories, the report states that including analysis of rhetoric (what they refer to as linguistic stance, conveyed through variables such as certainty, reasoning, curiosity, etc.) is key in moving from the identification of “talk that opposes or simply addresses” to “talk that promotes conspiracy theories.”

Another finding of the RAND report addresses the complexity of research on in-group/out-group language. Following the earlier discussion that conspiracy theories can, in part, be driven by in-group/out-group social divides, we might hypothesize that “we/us” versus “they/them” language would indicate conspiracy. However, the report explains that these pronouns may not always be used in an antisocial way. In-group/out-group language patterns are more nuanced. The 2021 study, led by Fong, found no significant differences in out-group language (e.g., “they/them” pronouns) between conspiracy and science influencers, but it did find that conspiracy followers’ use of out-group language was significantly higher than that of science followers.

Building a Digital Media Literacy Tool Kit

So where do we go from here? With the proliferation of information in digital environments comes a greater need for building robust media literacy. It is not just conspiracy theories that demand healthy skepticism, but any communication with the power to spread misinformation, disinformation, and fake news.

We rely on trustworthy information to make decisions and navigate our world, and in the 21st century, this means learning to evaluate digital media effectively. A 2024 article by the American Psychological Association recommends a four-part strategy: debunking (fact-checking), prebunking (preventive interventions), literacy training (media judgment education), and nudging (deterring people from spreading misinformation). While they acknowledge that this strategy sidesteps the responsibility of systems and institutions, its benefit lies in giving individuals the agency to consume, judge, and act upon information as they see fit.

Several organizations have published educational resources for assessing information in the digital age. In 2022, UNESCO published guidelines for educators as part of its “Pause” social media campaign responding to COVID-19 conspiracy theories. Libraries also offer guidance on information way-finding, from universities like the University of Toronto to local libraries like the Enoch Pratt Free Library in Baltimore, Maryland. Among these guidelines are resources for fact-checking, example questions for investigating author credibility and potential source bias, reminders to consider your own beliefs, and more.

The following are three acronymic strategies for evaluating the trustworthiness of information online. Consider choosing one that resonates with you most to add to your digital media literacy tool kit:

1. SIFT

The SIFT method was created by Mike Caulfield, an expert in digital literacy, as a four-step process for sifting through online information. Stop to consider your initial reaction as well as what you may (or may not) already know about the topic and source of the information before reading further. Investigate the source of the information by evaluating the author/poster/organization’s mission, authority, potential biases, and any vested interests. Find better coverage: do other sources corroborate or contradict the information? Trace information to its source. This includes checking links or consulting bibliographic references to see whether the information has been presented accurately per its original context. Caulfield points out that this strategy is vital not just for our own information-sifting but as an ethics-forward method to follow before sharing information with others (for example, reposting via social media).

2. CRAAP

The CRAAP test for assessing information was developed by librarian Sarah Blakeslee at California State University, Chico. In the words of Blakeslee, it’s a helpful mnemonic for asking, “Is this CRAAP?” In cases where events are evolving or unfolding in real time, information should be current: is it recent and up to date? Make sure the information is relevant to answering your question. The key here is to refine your search until you find information that is most applicable, instead of settling for close enough. Verify the source’s authority by checking credentials and expertise. Evaluate the accuracy of information; it should be both factual and correctly interpreted. And lastly, question the purpose of the information: Is it biased? What are the motives behind circulating this information?

3. IMVA/IN

The IMVA/IN method from Stony Brook University’s Center for News Literacy is designed for evaluating information from news media. Sources of information should be impartial; be wary of sources with a vested interest in the outcome. Review multiple sources to compare how they represent information. Look for verification of information rather than assertions that lack evidence. Authoritative/informed sources, such as an expert, are more reliable than their non-authoritative counterparts. Generally speaking, named sources foster more credibility than unnamed sources; however, this must be balanced with cases that require or call for anonymity.

When it comes to building a toolbox for recognizing conspiratorial language, it’s more about developing a critical approach than about constructing an impossibly perfect lie detector. We can use our judgment of specific linguistic features, combined with their context, as a catalyst for further investigation. As Stephan Lewandowsky and John Cook state in The Conspiracy Theory Handbook, “conventional thinking that values healthy skepticism, evidence, and consistency are necessary ingredients to uncovering real attempts to deceive the public.” This is the same approach we should take with language that seems conspiratorial.