Misunderstanding the Harms of Online Misinformation

In a new paper, researchers at the Computational Social Science Lab find that the influence of social media algorithms on misinformation consumption has been widely misunderstood. (Image: Wanlee Prachyapanaprai via Getty Images)

In 2006, Facebook launched its News Feed feature, sparking a seemingly endless and contentious public debate over the power of the “social media algorithm” to shape what people see online.

Nearly two decades and many algorithm tweaks later, this debate continues, now focused on whether social media recommendation algorithms are primarily responsible for exposure to online misinformation and extremist content.

Researchers at the Computational Social Science Lab (CSSLab) at the University of Pennsylvania, led by Duncan Watts, Stevens University Professor in Computer and Information Science (CIS), study Americans’ news consumption.

In a new article in Nature, Watts, along with David Rothschild of Microsoft Research (Wharton Ph.D. ‘11 and PI in the CSSLab), Ceren Budak of the University of Michigan, Brendan Nyhan of Dartmouth College and Annenberg alumna Emily Thorson (Ph.D. ’13) of Syracuse University, reviews years of behavioral science research on exposure to false and radical content online. They find that, despite a media narrative claiming the opposite, exposure to harmful and false information on social media is minimal for all but the most extreme users.

“The research shows that only a small fraction of people are exposed to false and radical content online,” says Rothschild, “and that it’s personal preferences, not algorithms, that lead people to this content. The people who are exposed to false and radical content are those who seek it out.”

Read the full story on the Annenberg School’s website
