We introduce a model of social misinformation diffusion based on confirmation bias and availability bias. The model explains how a person who shares only a single opinion with a denialist can end up being dragged into the quagmire of denialism and hate speech.
We start from an information-sharing economy in which social media does the work of providing interesting news, rumors, conspiracy theories, and even noise to encourage user interaction.
Social media platforms select customized sources to improve website traffic and increase conversion rates, and they offer a variety of affordances such as “Like,” “Dislike,” or “Subscribe” to build a user preference database. This database is maintained by DPL algorithms that filter data to match individuals’ preferences and learn from day-to-day choices. Thus, we argue that social media platforms generate content that captures attention and encourages users to share it with their social networks.
Proposition 1
Social media platforms maximize interactions via the choices of customized sources.
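As an illustration only (the paper does not formalize the preference database), the mechanism behind Proposition 1 can be sketched as a toy update rule in which each affordance adjusts a per-topic score. The action names and weights below are our assumptions:

```python
def update_preferences(prefs, topic, action):
    """Toy preference-database update driven by user affordances.
    `prefs` maps topic -> score; "like" and "subscribe" raise the
    score, "dislike" lowers it (weights are purely illustrative)."""
    delta = {"like": 1.0, "subscribe": 2.0, "dislike": -1.0}[action]
    prefs[topic] = prefs.get(topic, 0.0) + delta
    return prefs

prefs = {}
update_preferences(prefs, "politics", "like")
update_preferences(prefs, "politics", "subscribe")
update_preferences(prefs, "sports", "dislike")
# prefs now ranks "politics" above "sports" for customized sourcing
```

A platform choosing sources by the highest-scoring topics would then maximize interactions, as the proposition states.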
Availability bias. When the DPL algorithm focuses on content that a user will probably want to see, it increases the user’s perception of that content’s truthfulness, causing a judgment distortion: information’s importance is judged by how available it is rather than by how representative it is. This is the definition of availability bias, or the availability heuristic. It means that users’ decision-making is driven by their most accessible information. Luo and Markowitz (2020) state that the number of ‘likes’ boosts the perceived credibility of both real and fake headlines.
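Availability-driven judgment can be illustrated (this is our toy scoring rule, not part of the original model or of Luo and Markowitz's measurement) by letting perceived credibility grow with exposure counts such as likes, independently of the claim's actual truthfulness:

```python
import math

def perceived_credibility(base_truthfulness, likes, availability_weight=0.1):
    """Toy availability-bias model: perceived credibility rises with
    exposure (number of likes), regardless of whether the claim is
    true. The logarithmic boost and weight are illustrative choices."""
    boost = availability_weight * math.log1p(likes)
    return min(1.0, base_truthfulness + boost)

# A fake headline (low base truthfulness) with many likes can end up
# perceived as more credible than a real one with few likes.
fake = perceived_credibility(0.2, likes=10_000)
real = perceived_credibility(0.7, likes=0)
print(fake > real)  # True
```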
Proposition A
Users create opinions based on the number of interactions or on the availability of the information.
The passive act of searching for information and adding it to one’s own social media has become an active act of spreading one’s own opinions and influencing the formation of other individuals’ opinions (Frees and Koch, 2018). Users create opinions based on the interactions and information available in their own social media. This causes another judgment distortion: users process only information that supports their own beliefs. This is the definition of confirmation bias. In the social media context, there is a tendency to interpret the social network’s customized information as evidence of one’s existing beliefs.
Confirmation bias is an efficient way to process information when users are overwhelmed by social media content and cannot evaluate each post carefully.
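A minimal sketch of confirmation bias as a cheap filtering heuristic (our illustration, not the authors' formal model): belief-inconsistent or belief-neutral posts are simply dropped instead of being evaluated.

```python
def confirmation_filter(posts, beliefs):
    """Toy confirmation-bias heuristic: keep only posts whose stance
    agrees with the user's existing belief on the topic.
    `posts` is a list of (topic, stance) pairs with stance +1 or -1;
    `beliefs` maps topic -> the user's own stance."""
    return [(topic, stance) for topic, stance in posts
            if beliefs.get(topic, 0) * stance > 0]

posts = [("vaccines", +1), ("vaccines", -1), ("climate", +1)]
beliefs = {"vaccines": -1}  # the user already doubts vaccines
print(confirmation_filter(posts, beliefs))  # [('vaccines', -1)]
```

Only the belief-consistent post survives; everything else is discarded without careful processing, which is exactly what makes the heuristic cheap.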
Proposition B
Users consider the social network’s customized information as evidence of their existing beliefs.
As social media’s customized information endorses individuals’ opinions, it motivates users to express their opinions, also via social media. Content shared from user to user can form large cascades of resharing. Cheng et al. (2014) state that cascades are an information-sharing mechanism through which content reaches social media users; they occur when individuals in a population exhibit herd-like behavior. Content sharing has thus become a crucial information discovery tool on social networking websites (Cheng et al., 2014).
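The resharing cascades described by Cheng et al. (2014) can be illustrated, under our own simplifying assumptions (a fixed audience per share and an independent reshare probability), as a toy branching process:

```python
import random

def share_cascade(followers_per_user, share_prob, rng=None):
    """Toy resharing cascade: one seed share exposes
    `followers_per_user` users, each of whom reshares independently
    with probability `share_prob` (herd-like behavior). Returns the
    total number of shares. The cascade is subcritical and dies out
    when followers_per_user * share_prob < 1."""
    rng = rng or random.Random(42)  # seeded for reproducibility
    frontier, total = 1, 1
    while frontier:
        exposed = frontier * followers_per_user
        frontier = sum(rng.random() < share_prob for _ in range(exposed))
        total += frontier
    return total

small = share_cascade(5, 0.10)  # expected branching ratio 0.5: dies out
```

Raising `share_prob` past `1 / followers_per_user` pushes the process supercritical, where a single post can reach a large fraction of the network.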
Proposition C
Social networking information aligned with existing beliefs maximizes users’ interactions.
Now we return to the DPL algorithm’s learning process, which feeds the looping procedure. The DPL algorithm’s aim is to decide what users want to see on the platform and in what order posts are presented. Moreover, users are more likely to interact with content concordant with their previous beliefs: according to Moravec et al. (2019), users are more likely to rely on belief-consistent news.
Ruffo et al. (2021) state that repetition is a well-known propaganda mechanism exploited on social media, allowing a reinforcing feedback loop to build on the user’s online feed.
Proposition D
Under the DPL algorithm’s decisions, users interact with sources and posts aligned with their existing beliefs.
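The reinforcing feedback loop behind Proposition D can be sketched (the names, weights, and update rule are our illustrative assumptions) as a ranking that repeatedly promotes whatever the user last engaged with:

```python
def feedback_loop(source_alignment, steps=50, lr=0.2):
    """Toy reinforcing feedback loop: each round the platform shows
    the source it currently weights highest; belief-aligned content
    gets more engagement, which raises that source's weight for the
    next round, locking the feed onto aligned sources.
    `source_alignment` maps source -> alignment with the user's
    beliefs in [0, 1]."""
    weights = {s: 1.0 for s in source_alignment}
    for _ in range(steps):
        shown = max(weights, key=weights.get)   # platform's ranking decision
        engagement = source_alignment[shown]    # aligned content engages more
        weights[shown] += lr * engagement       # reinforcement update
    return weights

w = feedback_loop({"aligned_source": 0.9, "opposing_source": 0.3})
# the aligned source's weight grows every round; the opposing source's never does
```

After the first round the aligned source always wins the ranking, so the opposing source is never shown again — a compact picture of the repetition loop Ruffo et al. (2021) describe.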
The next step occurs when a fake news publisher or a troll intends to make false claims for personal purposes.
When a fake news publisher posts a polarized opinion, the social media algorithm creates a filter bubble (Pariser, 2011), i.e., it narrows the content that the platform presents on a user’s online feed. When this content is reshared by the user’s connections, the process triggers echo chambers (Sunstein, 2002). Ruffo et al. (2021) define echo chambers as tightly knit clusters of individuals who keep interacting until they become radicalized by a reinforcing feedback loop. The result is a rise of misinformation and skeptical views about social issues, supported by one of the most effective persuasion techniques for online news: repetition.
Proposition 3
The fake news publisher posts misinformation drawn from sources aligned with users’ existing beliefs, which the DPL algorithms then select and present to the user.
Selected sources and posts feed the social media network, and the volume of posts influences the opinions and decisions of others. In a conventional society, face-to-face interaction allows individuals to consider opposing views in a conversation; it allows for a better exchange of information and establishes trust between people. However, changes in the way people live, such as lockdowns, shelter-in-place orders, or social distancing policies, thwart these empathy effects.
In a lockdown environment, given the lack of consensus about an issue, people turn to informal sources of information to share their thoughts and discuss issues. At the same time, social media news reports potentially misleading stories entangled with social media post feedback, which inhibits the understanding of different or opposing opinions.
Allamong and Peterson (2021) show that empathic ability may play a key role in changing people’s behavior and that empathy depends on the respondent’s partisanship, the target’s partisanship, and the interaction between them. They also find that polarization reduces empathy.
Thus, an isolated environment reduces the opportunities for empathy that interaction among people would otherwise create. It also prevents the serendipity that could expose a user to new ideas with the potential to reduce polarization.
This results in a situation that reinforces filter bubbles, making people more vulnerable to radical opinions such as conspiracy theories and hate speech.