Misinformation isn’t a new problem, but it is an acute one, especially in this moment. Shortly after he purchased Twitter (now X), Elon Musk cut content moderation teams and reinstated the accounts of people who had been banned from the platform for a variety of reasons, including sharing misinformation. Mark Zuckerberg announced in January that Meta was ending its partnerships with third-party fact checkers. And on his first day in office, President Donald Trump signed an executive order targeting fact-checking on social media platforms.
While the justifications for and potential consequences of these changes have been debated, the debates themselves are often not firmly rooted in fact.
“There is broad bipartisan support for reducing the spread of false or misleading information online, and laypeople across the political spectrum think that relying on experts is the most legitimate way to make moderation decisions,” said David Rand, a professor of management science and brain and cognitive sciences at MIT.
“Contrary to recent claims from many political elites, the main problem with professional fact-checking is not bias or overreach,” he added. “The problem is that professional fact checkers can’t keep up with the vast scale of content posted every day. Crowd-based systems like X’s Community Notes can be useful tools to help extend the reach of fact-checking, ideally complementing professional fact checkers rather than replacing them.”
Rand, who has been studying misinformation for the better part of a decade, offers three empirically supported insights about online misinformation and content moderation.
Fact-checking may appear biased when it’s not
Anti-conservative bias is one argument against social media fact-checking. In a 2024 paper, Rand and his co-authors found that while it is true that pro-Trump conservatives are sanctioned more frequently than pro-Biden liberals, this does not actually provide evidence that fact checkers are biased. Rather, an analysis of 9,000 politically active Twitter users during the 2020 presidential election found that pro-Trump conservatives were more likely to share links to low-quality websites. This held true even when news quality was judged by bipartisan groups or by Republican laypeople.
“Therefore, even a completely politically neutral policy that targets the sharing of low-quality information will wind up sanctioning conservatives more and potentially appearing biased, despite lack of bias,” Rand said.
The asymmetry in enforcement is exactly what you would expect if fact-checking is simply doing its job — identifying and calling out questionable statements — rather than engaging in political retaliation, he added.
Rand and other scholars recently shared insights about social media content moderation in response to a Federal Trade Commission inquiry into “how technology platforms deny or degrade users’ access to services based on the content of their speech or affiliations.”
Fact-checker warnings work — and they’re broadly popular
Fact-checking does not just work in principle, Rand said; it achieves its aims in practice.
In two studies, one from 2023 and one from 2024, Rand and MIT Sloan PhD candidate Cameron Martel investigated the effectiveness of “warning labels” that fact checkers attach to questionable social media content. They found that adding the labels reduced people’s belief in false headlines by 27% and the sharing of false headlines by 25%. Though smaller, these effects still held true for people who professed distrust of fact checkers: The study found that their belief in false headlines declined by 13% and sharing by 17% when there was a warning label.
“Professional fact-checking is an important part of addressing the spread of false or misleading content on social media,” Rand said. “The main problem for fact-checking is not over-enforcement but rather [that] there’s no way for fact checkers to keep up with all the content posted online.”
Most people support some system of content moderation, he added. A 2023 survey of thousands of Americans conducted by Rand and several colleagues found broad bipartisan support for platforms moderating harmful or misleading content and using the judgment of experts to make such decisions, though there was a partisan divide about what the moderation should look like. When asked whether platforms should try to reduce the spread of harmful misinformation, 80% of respondents said they should — 93% of Democrats and 65% of Republicans. A similar percentage supported the creation of warning labels by independent fact checkers.
Crowd-based approaches should be a complement to professional fact-checking
Rand and his colleagues have demonstrated that harnessing the judgments of laypeople is a relatively effective way to identify false or misleading information at a larger scale than fact-checking alone. The most prominent example of a layperson-based tool is X’s Community Notes, a crowdsourced fact-checking program. One study that focused on information about the COVID-19 vaccine found that Community Notes agreed with expert judgment about 97.5% of the time.
“This ‘wisdom of the crowds’ approach is particularly useful for its inherent ability to work at scale,” Rand said. Laypeople often view content first and can respond quickly, and they can flag content overlooked by platforms. Paid fact checkers are more limited in number: One of Meta’s central fact-checking partners was able to review only about 125 stories per month at the height of Meta’s efforts.
But crowdsourced approaches to misinformation should complement expert fact-checking, Rand said, not replace it. Given the sheer volume of content on social media platforms, even the 2.5% of cases where Community Notes and expert judgment disagree is important to resolve. And in Community Notes-style programs, a note is typically shown to users only after it achieves consensus among raters across the political spectrum, which makes politically contentious falsehoods harder to label: Partisan disagreement can prevent helpful notes from ever being publicly displayed. In those cases, expert fact checkers are essential, Rand said.
“We do have a lot of work suggesting that crowd ratings are useful — but as an addition to professional fact-checking, not as a substitute,” he said. “And this is what the American public wants.”