
How Bots Manufacture Reality



How do bot networks create artificial agendas and spread false information on X?

How Bot Networks Manufacture an “Artificial Agenda” on X (Twitter) — and How False Information Gets Amplified

On X/Twitter, an “artificial agenda” is rarely created by a single gigantic lie. More often, it is created by many accounts making a small effect look like a massive public wave. Fully automated bots and semi-automated operator accounts exploit a structural weakness of the platform: when visible engagement metrics (likes, reposts, replies, quote-posts, views) rise fast, content starts to feel “important,” and once it feels “important,” it travels further.

The core trick: manufacturing attention, then borrowing credibility from that attention.

The operation is not primarily optimized for truth; it is optimized for spreadability. It tries to make you think: “Everyone is talking about this,” so you share it, react to it, or build your own commentary on top of it. At that point, the network doesn’t need to persuade you—your reaction becomes free distribution.

1) How an artificial agenda is built: coordinated amplification signals

Artificial agenda-setting works by producing a crowd illusion. Coordinated accounts can push the same claim, link, or framing through repeated interaction patterns, especially via reposts and reply storms. Research on coordinated behavior highlights that retweet/repost dynamics are a key lever because they can rapidly alter information cascades and visibility. When many accounts interact in a patterned way, the platform sees “momentum,” and humans see “consensus.”

What “evidence” looks like at scale (not anecdotes):

A major Pew Research Center analysis of roughly 1.2 million English-language tweets linking to popular websites estimated that 66% of tweeted links were shared by accounts with characteristics typical of automated or partially automated bots; among popular news and current-events sites, the share was likewise 66%. Crucially, the study also showed that a relatively small set of highly active bot-like accounts contributed a disproportionate share of link-sharing volume in prominent news categories. In other words: a small, highly active layer can inflate what the platform appears to be collectively discussing.
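The coordination pattern this section describes, many distinct accounts pushing the same link inside a short window, can be sketched as a toy detector. Everything here is illustrative: the share records, the 10-minute window, and the 3-account threshold are assumptions for the example, not platform values or a research-grade method.

```python
from collections import defaultdict

# Hypothetical toy data: (account, url, minutes_since_start).
# Accounts, URLs, and timings are invented for illustration.
shares = [
    ("acct_a", "example.com/story", 0),
    ("acct_b", "example.com/story", 2),
    ("acct_c", "example.com/story", 3),
    ("acct_d", "example.com/story", 4),
    ("acct_e", "other.com/post", 55),
]

def burst_urls(shares, window_minutes=10, min_accounts=3):
    """Flag URLs pushed by many distinct accounts within a short window,
    one simple proxy for coordinated amplification."""
    by_url = defaultdict(list)
    for account, url, minute in shares:
        by_url[url].append((minute, account))
    flagged = {}
    for url, posts in by_url.items():
        posts.sort()  # order each URL's shares by time
        times = [t for t, _ in posts]
        accounts = [a for _, a in posts]
        # Slide a time window over the sorted shares and count distinct accounts.
        for i in range(len(posts)):
            j = i
            while j < len(posts) and times[j] - times[i] <= window_minutes:
                j += 1
            distinct = set(accounts[i:j])
            if len(distinct) >= min_accounts:
                flagged[url] = len(distinct)
                break
    return flagged

print(burst_urls(shares))  # {'example.com/story': 4}
```

Real coordination research layers many such signals together; a single burst like this can also be perfectly organic (breaking news), which is why thresholds alone prove nothing.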

2) How false information is propagated: “engagement engineering” more than argument

False or misleading content is often packaged to trigger rapid reactions: anger, fear, disgust, humiliation, tribal identity cues (“us vs them”), and urgent calls to action. A large-scale study of rumor cascades on Twitter found that false news tends to diffuse farther, faster, deeper, and more broadly than true news—and importantly, that the primary driver of this difference is human behavior, not bots. This matters for awareness: bot networks can help create the initial pressure and visibility, but humans often carry the content to the finish line once it becomes emotionally and socially “usable.”

3) How provocation works: shifting people from verification to faction

Provocation does not require you to believe the claim fully. It only requires you to react. The goal may be less “persuasion” and more “friction production”: forcing people into identity-based positions, turning discussion into conflict, and keeping the platform in a high-arousal state where nuance dies and shortcuts win.

When emotion rises, verification falls. The person is pushed from “Is this true?” to “Which side am I on?” That shift is the fuel of artificial agendas.

4) Why this is not just theory: platform disclosures and public archives

X/Twitter itself has published large archives of accounts and content linked to suspected information operations. In October 2018, Twitter released comprehensive datasets associated with potential information operations, including accounts it linked to the Internet Research Agency (IRA) and an Iran-attributed operation, describing the release as a step toward enabling independent academic research. In later disclosures, it described publishing dozens of datasets of attributed platform-manipulation campaigns over multiple years and removing the associated accounts. Separately, Twitter’s retrospective review of its election-integrity work described identifying tens of thousands of automated, Russia-linked accounts tweeting election-related content during the 2016 period. These disclosures show that “malicious automation” is a phenomenon the platform itself has documented, while also noting that raw volume does not automatically equal broad impact.

5) What researchers measure when they say “coordination”

Coordinated networks can be uncovered by looking at shared behavioral traces: repeating the same retweet targets, matching content or URL patterns, synchronized timing, and co-retweet behavior (groups of accounts retweeting the same posts or the same set of accounts). Research has shown that coordinated accounts in retweet-based cascades can occupy higher positions in the cascade and spread messages faster than non-coordinated accounts, which helps explain why “manufactured attention” can look like organic momentum. These are measurable patterns—useful for analysis and for awareness—yet still imperfect, because bot and coordination detection can produce false positives and must be interpreted cautiously.
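One of the traces mentioned above, co-retweet behavior, is often measured as the overlap between two accounts' sets of reposted items, for example with Jaccard similarity. The sketch below is a minimal illustration of that idea; the account names, post IDs, and the 0.6 threshold are assumptions for the example, and real studies calibrate thresholds carefully precisely because of the false-positive risk the text notes.

```python
from itertools import combinations

# Toy retweet logs: account -> set of post IDs it reposted (invented data).
retweets = {
    "acct_1": {"p1", "p2", "p3", "p4"},
    "acct_2": {"p1", "p2", "p3", "p5"},
    "acct_3": {"p9"},
}

def co_retweet_pairs(retweets, threshold=0.6):
    """Return account pairs whose retweeted-post sets overlap heavily,
    scored by Jaccard similarity (|intersection| / |union|)."""
    pairs = []
    for a, b in combinations(sorted(retweets), 2):
        inter = retweets[a] & retweets[b]
        union = retweets[a] | retweets[b]
        score = len(inter) / len(union) if union else 0.0
        if score >= threshold:
            pairs.append((a, b, round(score, 2)))
    return pairs

print(co_retweet_pairs(retweets))  # [('acct_1', 'acct_2', 0.6)]
```

Two fans of the same celebrity will also co-retweet heavily, which is why researchers combine overlap scores with timing, content, and network structure before calling anything “coordinated.”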

6) A practical awareness checklist (low-tech, human-friendly)

You don’t need advanced tooling to reduce the risk of becoming a carrier of an artificial agenda. Before reacting or sharing, try these checks:

• Source check: Is there a primary source (official statement, full document, original video with context), or only screenshots and repost chains?

• Cross-verification: Is the claim confirmed by multiple independent outlets, or is it “many accounts repeating the same sentence”?

• Pattern check: Are many posts using identical phrasing, identical hashtags, or identical links in a short window?

• Account behavior: Do the loudest accounts post at unnatural frequency, push a single topic constantly, or mostly repost rather than create verifiable original reporting?

• Emotion check: If your immediate feeling is intense anger/fear/shame, treat it as a “slow down” signal. High arousal is where manipulation performs best.
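The “pattern check” above is genuinely low-tech: at its simplest it is just counting how often the exact same sentence appears. Here is a naive sketch of that idea, with invented example posts and an arbitrary repeat threshold of 3 (both are assumptions for illustration):

```python
from collections import Counter

# Hypothetical timeline snippets; the texts are invented for illustration.
posts = [
    "BREAKING: you won't believe what happened #topic",
    "BREAKING: you won't believe what happened #topic",
    "BREAKING: you won't believe what happened #topic",
    "Here is my own take on the story, with a link to the source.",
]

def repeated_phrasings(posts, min_repeats=3):
    """Flag texts repeated verbatim across many posts, a crude version
    of the manual 'identical phrasing' check."""
    counts = Counter(p.strip().lower() for p in posts)
    return {text: n for text, n in counts.items() if n >= min_repeats}

flagged = repeated_phrasings(posts)
print(flagged)  # one flagged phrase, seen 3 times
```

In practice you rarely need even this much: scrolling the replies under a trending post and noticing the same sentence repeated word-for-word is often enough of a warning sign.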

Closing: visibility is not truth, and trends are not evidence.

Artificial agendas thrive on a simple confusion: mistaking “high visibility” for “high validity.” A claim can be everywhere and still be wrong; it can be trending and still be manufactured; it can be emotionally satisfying and still be engineered. The most effective defense is not cynicism—it is verification discipline: pause, check primary sources, and refuse to donate your attention to content that is optimized to provoke rather than inform.

Sources:

1) Pew Research Center (Apr 9, 2018) — “Bots in the Twittersphere: An Analysis of the Links Automated Accounts Share.”

2) X (Twitter) Company Blog (Oct 17, 2018) — “Enabling further research of information operations on Twitter.”

3) X (Twitter) Company Blog (Dec 2, 2021) — “Disclosing state-linked information operations we’ve removed.”

4) Twitter (Feb 4, 2019) — “Retrospective Review: Twitter, Inc. and the 2018 Midterm Elections.”

5) Vosoughi, Roy & Aral (Science, 2018) — “The spread of true and false news online.”

6) Pacheco et al. (ICWSM 2021) — “Uncovering Coordinated Networks on Social Media: Methods and Case Studies.”

7) Cinelli et al. (Decision Support Systems, 2022) — “Coordinated inauthentic behavior and information spreading on Twitter.”

8) Indiana University OSoMe — Botometer (bot detection tool) and supporting documentation/papers.

9) X Transparency (EU DSA) — Systemic Risk Assessment reports (2024 report; 2025 summary).
