Misinformation and online mental health communities
With the increasing popularity of the internet (1, 2) and, more specifically, of social media (3) as venues for seeking and sharing health information, there is growing concern about the spread of health misinformation (4, 5). These concerns have recently intensified due to the COVID-19 pandemic (6). A recent systematic review of reviews found that the prevalence of health-related misinformation on social media ranged from 0.2% to 28.8% (7).
Swire-Thompson and Lazer (8) define misinformation as “information that is contrary to the epistemic consensus of the scientific community regarding a phenomenon”. Health misinformation is a specific type of misinformation, referring to a “health-related claim of fact that is currently false due to a lack of scientific evidence” (9). Because information on social media is user-generated, it can be subjective or inaccurate, making social media a worrisome source of health misinformation (6, 10). Moreover, such content can be archived and persist over time until it is corrected or deleted, becoming a dangerous resource for future health information seekers (11).
Public health researchers and practitioners are increasingly concerned about the potential for health misinformation to mislead the public: it not only creates erroneous health beliefs and confusion and reduces trust in health professionals, but can also “delay or prevent effective care, in some cases threatening the lives of individuals” (5, 10). Combating its effects has thus become crucial for public health (9, 12) and can be accomplished only by understanding its psychological drivers (13) and the factors that buffer against them. One particularly prominent finding that helps explain why people are susceptible to misinformation is the ‘illusory truth effect’, according to which repeated information is perceived as more truthful than new information (14–18).
Online communities on social networking sites have been identified as places where health misinformation can spread easily (19, 20), particularly by creating echo chambers in which erroneous information is reinforced through frequent repetition (10, 11). The prevalence of information of low relevance and questionable validity in online health communities is well documented (23–25).
Online communities specifically for mental health symptoms (OCMHs) are increasingly present on social networking sites, especially among younger generations (26), and can also be considered an outlet for misinformation. A recent content analysis [in peer review] found extremely high levels of misinformation in OCMHs; even communities moderated by health professionals (expert-led) were not exempt from this issue. This is in line with other studies showing that healthcare professionals can also spread misinformation in various ways (10).
Online communities generally rely on the work of volunteers to police themselves (27), some of whom are health professionals, while others are peers with no expert credentials (28–30). Although healthcare professionals play a critical role in ensuring information quality in online health communities (31, 32), the literature on this topic is scarce, especially regarding the differences that might emerge between these two types of communities, particularly in relation to misinformation.
Addressing misinformation in mental health is crucial for two key reasons. First, mental health is a growing public health concern (33) that has been underestimated, even though about 14% of the global disease burden has been attributed to neuropsychiatric disorders such as depression (25). Second, mental health conditions are frequently stigmatized and misunderstood, resulting in a greater prevalence of misinformation both online and offline (35–37). Nevertheless, few studies have examined misinformation regarding mental health specifically. It is therefore crucial to understand the conditions that can mitigate the outcomes of exposure to mental health misinformation.
In the context of the illusory truth effect, it has been found that the effects of repeated exposure to misinformation on perceptions of accuracy disappeared when the receiver knew the actual truth and that people were more likely to believe misinformation when they were unfamiliar with the issue at hand (18). Other studies have shown that knowledge is key to buffering against misinformation exposure (39–41).
Health literacy, defined as “the degree to which people have the capacity to obtain, process, and understand basic health information and services needed to make appropriate health decisions” (42), is at the heart of any discussion of health-related misinformation. A systematic review indicated that low health literacy was negatively related to the ability to evaluate online health information (19). Previous studies have shown that knowledge moderated the relationship between exposure and beliefs, but they focused on other facets of literacy, such as news literacy (16) or media literacy (17). One study found that higher cancer literacy helped participants identify misinformation and prevented them from being persuaded by it (22).
However, focusing solely on individual differences in susceptibility to misinformation while ignoring external factors may not be the most effective approach. As mentioned earlier, OCMH typologies can vary depending on their moderators' expertise. Previous research has demonstrated that the knowledge and guidance provided by peer patients differ significantly from those offered by professional healthcare providers (47). It is therefore critical to investigate potential differences between these groups in the outcomes of misinformation exposure.
Current Research and Hypotheses
Based on the above literature, the present study investigates the relationship between exposure to mental health misinformation in Italian online communities for mental health and agreement with it, focusing on two factors that might affect this relationship and on the interplay between them: depression literacy and type of OCMH moderation.
As different types of literacy exist in different health contexts, the present study focused on declarative knowledge about depression (depression literacy), depression being one of the most common mental illnesses in Italy (48, 49) and worldwide (50). Depression literacy is a facet of mental health literacy, defined as an individual’s knowledge regarding mental health (23). Mental health literacy, or the lack thereof, has been used to explain uncertainties or gaps in knowledge about mental health and the ensuing effects on effective treatment and care (24). Determining whether depression literacy can buffer the effects of misinformation exposure is critical, as is identifying which segments of the population are especially vulnerable to health misinformation and developing interventions for individuals at risk.
However, external factors can also influence the relationship between exposure to and agreement with misinformation, including aspects related to the expertise of the communities' content moderation.
Whether healthcare professionals or not, moderators have various tools at their disposal, such as deleting content, suspending users, or correcting inaccurate information. However, given that the quality and accuracy of online health information provided by OCMHs can vary significantly (33), it is crucial to consider the implications of participating in OCMHs with different types of content moderation, as this may affect the degree of exposure to health-related misinformation and its associated consequences. Furthermore, examining the interplay between internal (depression literacy) and external (type of OCMH moderation) factors sheds light on the most vulnerable individuals within online communities. This approach expands upon previous research on individual differences in misinformation susceptibility (16, 41) and provides a more comprehensive understanding of the complex dynamics that may influence agreement with misinformation within OCMHs.
Hypotheses and Research Question
We tested the following hypotheses using a Moderated Moderation Model (see Fig. 1 for the hypothesized model).
First, based on the literature on the illusory truth effect (34), we expect that:
H1: Misinformation Exposure will be positively and significantly associated with Misinformation Agreement.
However, based on the above literature on the protective role of knowledge and on the differences that might emerge across types of expert content moderation, we hypothesize that:
H2: The positive association between Misinformation Exposure and Misinformation Agreement will be moderated by depression literacy.
H3: The positive association between Misinformation Exposure and Misinformation Agreement will be moderated by type of OCMHs participation.
Then, as a research question, we will test whether Depression literacy and Type of OCMHs participation also interact with each other in the following way:
RQ: The moderating effect of depression literacy on the relationship between Misinformation Exposure and Misinformation Agreement will be further moderated by the type of OCMHs participation.
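The hypothesized moderated moderation model (a three-way interaction) can be sketched as a single regression in which Misinformation Agreement is predicted by Exposure, both moderators, and all their products. The sketch below uses simulated data and illustrative variable names (not the study's measures or results) to show how H1–H3 and the RQ map onto regression terms:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated predictors (illustrative, not the study's actual data)
exposure = rng.normal(size=n)            # misinformation exposure
literacy = rng.normal(size=n)            # depression literacy
expert_led = rng.integers(0, 2, size=n)  # 0 = peer-led, 1 = expert-led OCMH

# Generate agreement with an assumed three-way interaction structure
agreement = (0.5 * exposure                        # H1: main effect of exposure
             - 0.3 * exposure * literacy           # H2: literacy buffers exposure
             + 0.2 * exposure * expert_led         # H3: moderation by OCMH type
             + 0.25 * exposure * literacy * expert_led  # RQ: three-way term
             + rng.normal(scale=0.5, size=n))

# Full design matrix: intercept, main effects, two-way products, three-way product
X = np.column_stack([
    np.ones(n), exposure, literacy, expert_led,
    exposure * literacy, exposure * expert_led, literacy * expert_led,
    exposure * literacy * expert_led,
])
coefs, *_ = np.linalg.lstsq(X, agreement, rcond=None)

labels = ["intercept", "exposure", "literacy", "expert_led",
          "exp x lit", "exp x type", "lit x type", "exp x lit x type"]
for name, b in zip(labels, coefs):
    print(f"{name:>17}: {b:+.2f}")
```

A significant `exp x lit x type` coefficient would indicate that the buffering effect of literacy on the exposure–agreement link differs between peer-led and expert-led communities, which is the pattern the RQ probes.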