Misinformation took over social media after the Key Bridge collapse. Why is this now a regular occurrence — and what can be done to stop it?

Changes in content moderation and user bases on platforms like X have created an opportunity for conspiracy theories to reach a massive, mainstream audience. (Andrew Brookes via Getty Images)

The influence of unfounded conspiracy theories on online conversations around major news events is a growing concern, given that at least half of Americans get their news from social media platforms.

What were once considered “fringe conspiracy theories,” ideas that depart significantly from mainstream, accepted information, are now highlighted as Top Tweets by verified X users. This is especially problematic when people visit the platform for timely updates on news events like the Francis Scott Key Bridge collapse in Baltimore or Kate Middleton’s health.

“Historically, [X has] been the place that a lot of people associate with the news media,” Walter Scheirer, the author of A History of Fake Things on the Internet, told Yahoo News. “Traditional media is slow to report on [stories] by the internet’s time scale. If I want to know what’s happening second by second, of course, I’m going to turn to [X].”

Sandwiched between updates from news organizations about the Key Bridge collapse, there were posts — sometimes by verified users — that posited baseless conspiracies about what “really” happened. For the average person trying to parse the story, this collection of opposing narratives blurs the lines between what’s factual and what isn’t.

The way the misinformation is presented can also mimic how legitimate news updates look, further confusing readers. In an effort to avoid contributing to the spread of misinformation, Yahoo News will not be addressing specific conspiracy theories.

Particularly on X, formerly known as Twitter, changes enacted by owner Elon Musk — such as cutting the global moderation team and allowing any user to pay for a verification check mark — have been seen as creating chaos and confusion when news breaks. In addition to a verification check mark, paying X accounts also get their posts promoted. X did not respond to Yahoo News’ request for comment.

Conspiracy theories on social media

Fringe theories were once relegated to the dark corners of the internet on platforms like 4chan, the anonymous, loosely moderated online forum where the QAnon movement started. Previously, an average internet user who wanted to engage with this kind of content would have to dig to find it.

But Scheirer argues that since Musk took over, the X user base has changed dramatically. The departure of many legitimate journalists from the platform, paired with the bolstering of fringe accounts, has blurred the line between fact and speculation. This has created an opportunity for conspiracy theories to reach a massive, mainstream audience.

Musk has faced backlash from experts and journalists for continuing to engage with and promote conspiracy theories on X, which only extends their reach, and arguably their legitimacy, to more people.

Scheirer explained that conspiracy theories are a “longstanding piece of human communication” that existed well before social media. They offer a coping method for people trying to rationalize major news, which, with the 24-hour news cycle, feels like it’s happening all the time.

“There's always going to be some uncertainty in a big event, like a pandemic, like a terror attack,” Scheirer said. “There's a strong human inclination to tell stories and to filter complex situations through fiction, and the internet, with all of its creative software and its communication capabilities, is just really good at facilitating that.”

The issue isn’t exclusive to X, either. Facebook, TikTok and YouTube, which serve as major news sources for Americans, have all been forced to deal with the spread of misinformation on their platforms.

A 2021 study analyzing Facebook user behavior around the 2020 election found that the company’s algorithms promoted misinformation over more trustworthy sources. Researchers found that nearly 1 in 5 TikTok videos suggested by the platform on prominent news topics contained misinformation. Fact-checking groups have accused YouTube, which is owned by Google, of being a significant conduit of disinformation.

Should social media platforms be held accountable for misinformation?

How much accountability social media platforms should bear for misinformation has recently been debated in the courts.

In what was called a "historic" court case, a New York judge on March 19 denied a motion to dismiss a lawsuit claiming that several social media platforms should be held liable for their roles in mass shootings. In June, the U.S. Supreme Court is expected to rule on whether content moderation on platforms violates users’ First Amendment rights.

Misinformation can’t be stopped, and it exists outside of social media. The question is whether platforms should be held responsible for moderating it better. To experts like Scheirer, the answer isn’t so black and white: centralized regulation would be very difficult to implement.

“[Social media companies] would have to be very heavy-handed in terms of the control of speech on their platforms, which would probably alienate large portions of their user bases, and then the businesses just wouldn't be viable,” Scheirer said. “That's a huge issue.”
