In The Politics of Autism, I analyze the discredited notion that vaccines cause autism. This bogus idea can hurt people by allowing diseases to spread. And among those diseases could be COVID-19.
Antivaxxers are sometimes violent, often abusive, and always wrong.
At Science Advances, David A. Broniatowski and colleagues evaluate the efficacy of Facebook's vaccine misinformation policies. From the abstract:

Online misinformation promotes distrust in science, undermines public health, and may drive civil unrest. During the coronavirus disease 2019 (COVID-19) pandemic, Facebook—the world’s largest social media company—began to remove vaccine misinformation as a matter of policy. We evaluated the efficacy of these policies using a comparative interrupted time-series design. We found that Facebook removed some antivaccine content, but we did not observe decreases in overall engagement with antivaccine content. Provaccine content was also removed, and antivaccine content became more misinformative, more politically polarized, and more likely to be seen in users’ newsfeeds. We explain these findings as a consequence of Facebook’s system architecture, which provides substantial flexibility to motivated users who wish to disseminate misinformation through multiple channels. Facebook’s architecture may therefore afford antivaccine content producers several means to circumvent the intent of misinformation removal policies.
From the article:
Our findings suggest that Facebook’s policies may have reduced the number of posts in antivaccine venues but did not induce a sustained reduction in engagement with antivaccine content. Misinformation proportions both on and off the platform appear to have increased. Furthermore, it appears that antivaccine page administrators especially focused on promoting content that outpaced updates to Facebook’s moderation policies: The largest increases appear to have been associated with topics falsely attributing severe vaccine adverse events and deaths to COVID-19 vaccines. Since engagement levels with antivaccine page content did not differ significantly from prepolicy trends, this potentially reflects vaccine-hesitant users’ desire for more information regarding a novel vaccine at a time when specific false claims had not yet been explicitly debunked. This underscores a need to account for, and address, the forces driving users’ engagement with—i.e., demand for—misinformative content.
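For readers unfamiliar with the method, a comparative interrupted time-series design fits pre- and post-policy trends for a treated outcome series (here, engagement with antivaccine pages) alongside a comparison series, so the policy's effect shows up as treated-group-specific changes in level and slope. The sketch below is purely illustrative: the data are simulated, and the variable names, the comparison group, and the ordinary least squares specification are my assumptions, not the authors' actual code or estimation strategy.

```python
# A minimal sketch of a comparative interrupted time-series (CITS)
# analysis, the design named in the abstract. All data are simulated;
# the groups, shifts, and model specification are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
weeks = np.arange(104)       # two years of weekly observations
policy_week = 52             # week the removal policy takes effect

def simulate(treated, level_shift, slope_shift):
    """Simulate weekly engagement counts for one group of pages."""
    post = (weeks >= policy_week).astype(int)
    t_since = np.where(post == 1, weeks - policy_week, 0)
    y = (1000 + 2.0 * weeks                  # shared pre-policy trend
         + level_shift * post                # jump at the policy date
         + slope_shift * t_since             # change in trend afterward
         + rng.normal(0, 30, weeks.size))    # noise
    return pd.DataFrame({"engagement": y, "week": weeks, "post": post,
                         "t_since": t_since, "treated": treated})

# Treated series (antivaccine pages) vs. a comparison series
# (e.g., provaccine pages), stacked into one panel.
df = pd.concat([simulate(1, -80.0, -1.5), simulate(0, 0.0, 0.0)])

# The interactions with `treated` estimate how much the level and
# slope changed in the treated series beyond any change in the
# comparison series at the same moment.
model = smf.ols(
    "engagement ~ week + post + t_since"
    " + treated + treated:week + treated:post + treated:t_since",
    data=df,
).fit()
print(model.summary().tables[1])
```

In this specification, the treated:post and treated:t_since coefficients estimate the immediate level change and the change in trend attributable to the policy, over and above whatever happened in the comparison series; a finding of no sustained reduction in engagement corresponds to those coefficients being statistically indistinguishable from zero.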