Here is why conspiracy theories spread so fast on Twitter

Former president Donald Trump, who helped spread the baseless voter fraud conspiracy theory, speaks to supporters during a commit to caucus rally on Sept. 20, 2023, at the Jackson County Fairgrounds in Maquoketa, Iowa. (Photo by Jabin Botsford/The Washington Post via Getty Images)

Conspiracy theories spread rapidly on social media networks such as X (formerly known as Twitter) - and now scientists are closer to understanding why.

After simulating Twitter, researchers believe that a ‘negativity bias’ - the tendency to repost content that expresses negative emotion - helps conspiracy theories spread.

The researchers analysed tweets claiming widespread voter fraud in the 2020 U.S. election, a debunked conspiracy theory promoted by then-President Donald Trump.

The researchers, led by Mason Youngblood, simulated the behaviour of around 350,000 Twitter users, based on a real record of how the conspiracy theory spread on the social network.

They found that the sharing patterns of some four million tweets about voter fraud were consistent with people being much more likely to retweet posts containing stronger negative emotion.

Youngblood said: "Conspiracy theories about large-scale voter fraud spread widely and rapidly on Twitter during the 2020 U.S. presidential election, but it is unclear what processes are responsible for their amplification.

"Our results suggest that the spread of voter fraud messages on Twitter was driven by a bias for tweets with more negative emotion, and this has important implications for current debates on how to counter the spread of conspiracy theories and misinformation on social media."


The data for their study came from the VoterFraud2020 dataset, collected between October 23 and December 16, 2020.

This dataset includes 7.6 million tweets and 25.6 million retweets that were collected in real time using Twitter's streaming Application Programming Interface (API).
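The paper's collection pipeline is not reproduced here, but keyword-filtered streaming collection of this kind was typically done with something like the minimal sketch below. It assumes the older tweepy 3.x StreamListener interface (since removed from the library); the credentials and keyword list are placeholders, not the VoterFraud2020 team's actual configuration.

import json
import tweepy

# Placeholder credentials - assumptions for illustration, not real values.
CONSUMER_KEY = "..."
CONSUMER_SECRET = "..."
ACCESS_TOKEN = "..."
ACCESS_TOKEN_SECRET = "..."

class FraudStreamListener(tweepy.StreamListener):
    """Append each matching tweet to a local JSON-lines file."""
    def on_status(self, status):
        with open("voter_fraud_stream.jsonl", "a") as f:
            f.write(json.dumps(status._json) + "\n")

    def on_error(self, status_code):
        # Stop streaming if Twitter signals rate limiting.
        if status_code == 420:
            return False

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
stream = tweepy.Stream(auth=auth, listener=FraudStreamListener())
stream.filter(track=["voter fraud", "election fraud"])  # illustrative keywords only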

The team ran simulations of individual users tweeting and retweeting one another under different levels and forms of cognitive bias and compared the output to real patterns of retweet behaviour among proponents of voter fraud conspiracy theories during and around the election.
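Youngblood's published model is far richer than this, but the core mechanism - a retweet probability that rises with a tweet's negative emotion, controlled by a bias parameter - can be illustrated with a toy sketch like the one below. The parameter names, the linear boost and the example values are assumptions made for illustration, not the study's actual specification.

import random

def retweet_probability(negativity, bias_strength, base_rate=0.05):
    """Probability an agent retweets a tweet, boosted by its negative emotion.

    negativity: score in [0, 1], e.g. from a sentiment classifier
    bias_strength: 0 means no negativity bias; larger means a stronger preference
    """
    return min(1.0, base_rate * (1.0 + bias_strength * negativity))

def simulate(tweets, n_agents=1000, bias_strength=4.0, seed=0):
    """Toy agent-based run: each agent sees each tweet once and may retweet it."""
    rng = random.Random(seed)
    retweet_counts = {t["id"]: 0 for t in tweets}
    for _ in range(n_agents):
        for t in tweets:
            if rng.random() < retweet_probability(t["negativity"], bias_strength):
                retweet_counts[t["id"]] += 1
    return retweet_counts

if __name__ == "__main__":
    tweets = [
        {"id": "neutral", "negativity": 0.1},
        {"id": "angry", "negativity": 0.9},
    ]
    # The more negative tweet accumulates far more retweets under the bias.
    print(simulate(tweets))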

The researchers say their results are consistent with previous research suggesting that negative stories tend to ‘float to the top’ in discussions on social media across a variety of domains, including news coverage and political discourse.

Youngblood says the team's research may be useful for simulating interventions against misinformation in the future.

The model could be easily modified to reflect the ways that social media companies or policy makers might try to curb the spread of information, such as reducing the rate at which tweets hit people's timelines.
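In the toy sketch above, such an intervention could be represented by a single extra 'exposure rate' parameter that throttles how often a tweet reaches an agent's timeline at all. Again, this is an illustrative assumption building on the earlier sketch (it reuses its retweet_probability function and random import), not the study's actual implementation.

def simulate_with_throttling(tweets, n_agents=1000, bias_strength=4.0,
                             exposure_rate=0.5, seed=0):
    """Same toy model, but each tweet only reaches a given agent's timeline
    with probability `exposure_rate`, mimicking a platform slowing delivery."""
    rng = random.Random(seed)
    retweet_counts = {t["id"]: 0 for t in tweets}
    for _ in range(n_agents):
        for t in tweets:
            if rng.random() > exposure_rate:
                continue  # the tweet never reached this agent's timeline
            if rng.random() < retweet_probability(t["negativity"], bias_strength):
                retweet_counts[t["id"]] += 1
    return retweet_counts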

Why so negative?

Research published this year in the journal PNAS Nexus shed more light on the well-known phenomenon that negative content tends to be shared more widely on social media.

This happens despite the fact that the majority of social media content is actually positive in tone, the researchers say.

The researchers wrote: "Social media users tend to produce content that contains more positive than negative emotional language. However, negative emotional language is more likely to be shared."

The researchers also found that negative news or content spreads even more widely when it is posted by famous people.

"We found that negativity boosts the likelihood of public figures' content being shared, more than ordinary users. This is explained by the fact that negativity is more likely to be shared for weaker social ties and political content," they added.
