Social media misinformation and likes.

Is social media misinformation a threat to society?

Something shifted in online content around 2016. Suddenly, social media misinformation became a dire threat, graver than ever, and we had to do something about it. The threat seems plausible: a powerful country left the European Union because of supposed voter ignorance, people died because of vaccine misinformation, and Russian bots and troll farms influenced – and keep influencing – elections. These events alone should convince anyone and their mother about the potential harms of social media.

But, if we think about it, why do we believe this is a general problem? Could it be that we accept these claims at face value simply because they seem so plausible?

In a recent Perspective published in the scientific journal Nature, Ceren Budak and colleagues discuss the most common misperceptions and arguments about social media misinformation and its harmful effects. They highlight that sweeping claims made by intellectuals and journalists in the media frequently lack empirical support. The authors specifically focus on three common arguments:

  1. Average exposure to misinformation is high and growing; as a result, a substantial fraction of the population is exposed to it frequently.
  2. Exposure to this content is primarily driven by the platforms’ algorithms rather than by individual users deliberately seeking out such content.
  3. Correlations between exposure to false and extremist content on social media platforms and undesirable outcomes, such as polarization, show that this content causes harmful attitudes and behaviors.

Although these claims seem plausible, we should consider the supporting evidence. This scrutiny is crucial as related arguments affect our approach to free speech, online regulation, and censorship. Fortunately, Budak and colleagues provide a refreshing counterpoint, encouraging more debate and scrutiny of these dominant arguments.

The volumes of harmful content and misinformation on social media

Especially since 2016, we’ve faced repeated warnings about the growing exposure to false and extremist information on social media.

Reports may provide staggering statistics about the number of views associated with extreme or false content. However, these figures often lack context, missing the bigger picture of the vast volumes of content on social media platforms.

Budak and colleagues draw our attention to Facebook’s alarming reports indicating that Russian-troll content allegedly reached 126 million US citizens before the 2016 presidential election. However, the authors highlight that this material represented only 0.004% of the content that US citizens saw in their news feeds.

“We acknowledge that these figures are huge in absolute terms,” the authors noted, “but in context, we believe that their effects are likely to be small given that they represent a tiny proportion of total information flows on the platforms.”
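To make that proportion concrete, here is a rough back-of-the-envelope sketch (mine, not the authors’): it takes the 0.004% share cited above and applies it to a purely hypothetical number of feed items a single user might scroll past.

```python
# Back-of-the-envelope sketch; the feed volume below is an assumption, not a figure from the paper.
# Question: if troll-farm posts made up ~0.004% of what users saw in their feeds,
# how many such posts would a single user have scrolled past?

TROLL_SHARE = 0.004 / 100      # 0.004% of feed content (the share cited above)
FEED_ITEMS_SEEN = 10_000       # hypothetical number of posts one user scrolls past over the period

expected_troll_posts = TROLL_SHARE * FEED_ITEMS_SEEN
print(f"Expected troll posts seen per user: {expected_troll_posts:.1f}")
# -> roughly 0.4 posts out of 10,000 under these assumptions
```

The exact numbers are assumptions; the point is only that a figure that sounds enormous in absolute terms can still be vanishingly small relative to the total flow of content in a news feed.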

They emphasized that previous research indicates that content requires prolonged and repeated exposure to produce even brief media effects. Moreover, these effects are further diluted by competing sources of information and our ability to opt out of news altogether.

Do social media algorithms affect behaviors?

Another common misconception relates to the algorithms of social media platforms. We often hear that filter bubbles or echo chambers trap users and expose them to inflammatory content. However, in practice, we have little evidence of the algorithms’ impact on beliefs and behaviors.

Rather than algorithms dictating our news consumption, our content diet reflects our existing preferences. In other words, people who consume extreme or unreliable content often seek it out deliberately.

Remember that these findings don’t rule out algorithms’ direct or indirect impact on our beliefs and behaviors. They might well affect us; the evidence simply isn’t clear. As a result, the narrative connecting algorithms with the harmful impact of social-media misinformation seems exaggerated. And while news coverage seemingly inflates the algorithms’ power to shape us, it downplays our active demand for specific content.

With all this in mind, is it possible that people’s preferences for more extreme or unreliable content derive from general distrust? Put differently, are the links between social media content and extreme behaviors correlational or causal? If causal, what causes what?

Let’s have a look at that.           

Our searches reflect our thoughts

The effect of social media on polarized or polarizing ideas and behaviors may seem self-evident. For example, increased social media use coincides with more radical views and distrust toward social structures, including governments and media.

However, Budak and colleagues point out that although social media use and polarization correlate, news outlets and commentators have often overstated – and sometimes reversed – the causal relationship.

For example, one study suggested that reduced Facebook use had little to no effect on affective polarization. Affective polarization refers to tribalistic feelings toward specific in- or out-groups. In fact, data seem to suggest the opposite causal relationship: affective polarization predicts social media behaviors.

Even though anti-establishment movements or anti-vaccine advocates use social media platforms to coordinate or spread information, the causal relationship is, at best, unclear. Simply put, the evidence does not strongly support broad statements about social media’s influence on social unrest, polarization, or general knowledge.

But, of course, such causal relationships are next to impossible to confirm. The authors of the Nature Perspective acknowledge that confirming them would require researchers to systematically limit or vary access to social media and evaluate the outcomes accordingly. In other words, we simply don’t know whether – or how – social media significantly affects our behavior toward society.

Time to refocus and find the root causes

This Nature Perspective is a noteworthy addition to the discussion about social media misinformation. So, what can we learn from it?

For one, we can see real-life examples of logical fallacies and confirmation bias. Hasty generalization is a fallacy that occurs when someone makes a broad or sweeping claim based on a small or unrepresentative sample, drawing a conclusion about a whole group or category from insufficient evidence.

Although advocates for stricter regulations have made sweeping claims about the harmful impact of social media, the supporting evidence remains unclear. In other words, we observe a potential case of hasty generalization with a touch of confirmation bias.

Additionally, I’ve always found the particular focus on social media content in the misinformation discussion puzzling. How come we invariably assume that the harmful effects of misinformation originate among the average Joes and Janes? Why don’t initiatives to battle misinformation, such as fact-checking organizations and media literacy programs, hold mainstream media and leading politicians to the same accountability standards?

These questions deserve proper attention, even if you agree with the current narrative. Budak and colleagues shared their take:

“Traditional news, and in particular television news, still dominates people’s news consumption and political elites seek to shape that news coverage. As a result, the mainstream media are a key mechanism for exposing broad audiences to false claims, which often originate with political elites.”

How often do we get things spelled out like that?

We’re often told that individuals should refrain from researching, asking questions, or publishing inflammatory or false content. We’re also told we can rely on regulations and pressure campaigns on Big Tech to keep us sheltered from harm.

Still, few of these voices express condemnation when established news outlets share ill-advised or false information. Also, few suggest engaging people in meaningful discussions to improve the general knowledge about issues that matter. And even fewer acknowledge the existing evidence of causal relationships between economic insecurities and populism, extremism, and distrust.

Apparently, instead of enhancing general critical and scientific thinking, we should remain protected from what we don’t understand. As we’ve seen before, self-evaluation and introspection within the criticized social institutions are conspicuous by their absence. Blaming widespread ignorance and regulating or censoring false or uncomfortable content is easier.

Considering our limited knowledge about social media misinformation, maybe the real issues lie beyond regulating content. Maybe it’s time to dial down the assertiveness of those arguments, start promoting intellectual self-defense (in the words of Chomsky), and let people participate in discussions and decision-making. You know, meaningful engagement.

