Misinformation and Disinformation: An Increasingly Apparent Threat to Global Health Security – Part II

Since our last post, the issue of health-related mis- and disinformation has continued to gain currency, particularly in light of measles outbreaks both in the US and in many countries abroad. Health-related misinformation occurs organically through information sharing among ill-informed individuals; disinformation, on the other hand, is the direct result of an orchestrated effort by a nefarious actor. Although the final products of disinformation and misinformation activities are similar, focusing on the differences in their development can provide opportunities for intervention.

Over the past decade, orchestrated disinformation campaigns have bent social media platforms to their will, mounting public pressure and forcing companies to respond. Here, we'll dive more deeply into how several key social media companies have begun to address the presence of misinformation on their platforms, offer a preliminary assessment of the sufficiency of these efforts, and provide some additional considerations for the future.

The Modern Social Media Landscape

Currently, six social media sites have active user bases above one billion people. Of these six, Facebook owns four (Facebook, WhatsApp, Facebook Messenger, and Instagram), Google owns one (YouTube), and the Chinese company Tencent owns the remaining one (WeChat). The next few sections will look at how three social media platforms have handled health-related misinformation.

The three platforms featured in this blog, Facebook, YouTube, and Twitter, reach global audiences and represent well-documented case studies that share similar experiences as well as unique challenges in their efforts to address mis- and disinformation.

Facebook 

From a content reliability standpoint, Facebook has been under mounting pressure to address disinformation following the use of its platform to spread disinformation during the 2016 U.S. presidential election. Senate hearings and excerpts from the Mueller report highlighted the threat that disinformation campaigns pose to a nation’s democracy and demonstrated how a foreign state may be able to extend its influence.

Among other things, this event and the ensuing public scrutiny brought issues around the identity and responsibility of social media platforms, and the potential need for a modernized regulatory approach to those platforms, to the forefront of our national dialogue.

In 2017, Facebook officials recognized their role in the increasing prevalence of disinformation and listed several “key areas” where the company is working to address the problem:

-      Disrupting the economic incentives of misinformation (because most false news is financially motivated)

-      Building new products (to curb the spread of false news)

-      Helping people make more informed decisions (when they encounter false news)

Facebook took actions to work toward achieving the goals of these three key areas. The social media titan has publicly shared its efforts to remove harmful accounts and groups from the platform, has altered its internal algorithm and developed AI to try to deter the dissemination of false information, and has injected large amounts of money into journalistic efforts designed to build an informed public. Some metrics show that the site’s efforts to date to stall the spread of misinformation may have had a small yet positive impact, and public statements from CEO Mark Zuckerberg show that the company is now proactively calling for regulation that could curb harmful content on the internet.
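Facebook has not published the internals of these ranking changes, but the general "reduce distribution" approach can be illustrated with a minimal sketch. Everything below (the post fields, the demotion factor, and the scores) is a hypothetical illustration, not Facebook's actual News Feed code.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    engagement_score: float     # predicted likes, comments, and shares
    fact_checker_flagged: bool  # hypothetical flag from a third-party fact-checker

# Hypothetical demotion factor; a real platform would tune this empirically.
DEMOTION_FACTOR = 0.2

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order a feed by engagement while demoting fact-checker-flagged posts."""
    def score(post: Post) -> float:
        if post.fact_checker_flagged:
            return post.engagement_score * DEMOTION_FACTOR
        return post.engagement_score
    return sorted(posts, key=score, reverse=True)

feed = [
    Post("credible-news", engagement_score=0.6, fact_checker_flagged=False),
    Post("viral-hoax", engagement_score=0.9, fact_checker_flagged=True),
]
print([p.post_id for p in rank_feed(feed)])  # ['credible-news', 'viral-hoax']
```

The design choice worth noting is that flagged content is down-ranked rather than removed, which is why demotion alone can leave persistent misinformation in circulation, just at lower volume.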

While these efforts address several key issues in the battle against mis- and disinformation, it is fair to question how immediate and thorough the impact of these actions will be. Facebook has dedicated a large portion of its effort to addressing misinformation during democratic elections, a trend also seen with other social media platforms. And, while it is obvious that the protection of global democracies should be a top priority, it raises concerns that issues like health-related misinformation may fall by the wayside.

Facing rising measles cases and press attention highlighting Facebook’s role in growing anti-vaccination sentiment, the platform released a plan to ‘combat vaccine misinformation’. This was one of the company’s first public plans to directly address a health misinformation issue, and although it is a positive step toward solving the problem, the scope of the effort does not match the scale of the vaccine misinformation challenge, or the entire swath of health-related misinformation that flourishes on their site. The steps Facebook laid out in their plan to address vaccine misinformation are rational and well-intentioned, but, as referenced above, the problem has persisted beyond their initial efforts. It is encouraging to see this leader in social media directly address the challenge, but they must continue to refine their efforts if they wish to make a meaningful impact on their site. Moving forward, Facebook should:

-      Continue their ongoing efforts to address misinformation. To their credit, Facebook does acknowledge that this problem will require a continuous and committed effort, and that the problem is not yet solved.

-      Develop groups, like their election war rooms, that have a primary focus on identifying and stopping the spread of obvious health-related misinformation. Due to the speed at which information moves through social media, a proactive approach is likely to be more effective than a reactive one.

-      Ensure that the future direction of the platform considers the implications of misinformation dissemination as a key factor in their decision-making process.

YouTube

Most social media platforms have acknowledged that their algorithms play a role in the spread of misinformation, but it’s important to recognize that there are degrees to how efficiently this occurs. YouTube in particular has long been known for its tendency to direct viewers down a rabbit hole of related content. This design is meant to engage viewers, keeping them on the platform consuming both new content and advertisements. Unfortunately, there is documentation that this algorithm may be particularly good at spreading conspiracy-minded content, a dangerous mechanism for the spread of misinformation. As far back as 2007, researchers noted that the site had become a ‘breeding ground’ for misinformation. Despite these early warning signs, however, YouTube took few definitive actions to counter this challenge.
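YouTube's actual recommender is a proprietary, large-scale system, but the rabbit-hole dynamic described above can be sketched with a toy greedy "up next" loop. The video graph, titles, and watch-time estimates below are invented for illustration; the point is only that always choosing the most engaging related item can pull a viewer steadily toward more sensational content.

```python
# Toy illustration of an engagement-greedy autoplay loop, not YouTube's
# actual recommender. The graph and watch-time estimates are invented.
RELATED = {
    "intro-to-vaccines":    ["cdc-explainer", "vaccine-debate"],
    "cdc-explainer":        ["intro-to-vaccines", "vaccine-debate"],
    "vaccine-debate":       ["vaccine-injury-story", "cdc-explainer"],
    "vaccine-injury-story": ["antivax-conspiracy"],
    "antivax-conspiracy":   ["antivax-conspiracy"],  # self-reinforcing cluster
}

# Hypothetical predicted watch time; sensational content often scores higher.
PREDICTED_WATCH_TIME = {
    "intro-to-vaccines": 3.0,
    "cdc-explainer": 2.5,
    "vaccine-debate": 4.0,
    "vaccine-injury-story": 5.5,
    "antivax-conspiracy": 7.0,
}

def autoplay_chain(start: str, hops: int) -> list[str]:
    """Follow the single most 'engaging' related video at every step."""
    chain = [start]
    for _ in range(hops):
        candidates = RELATED[chain[-1]]
        chain.append(max(candidates, key=PREDICTED_WATCH_TIME.get))
    return chain

print(autoplay_chain("intro-to-vaccines", 4))
# ['intro-to-vaccines', 'vaccine-debate', 'vaccine-injury-story',
#  'antivax-conspiracy', 'antivax-conspiracy']
```

Even in this contrived example, a viewer who starts at an informational video ends, a few hops later, trapped in the conspiratorial cluster, because each step optimizes for engagement rather than accuracy.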

In March of 2018, YouTube CEO Susan Wojcicki sat down for a long interview at the annual South by Southwest conference in Austin, Texas. She shared views that stressed the importance of free information and emphasized that false information was only a minimal portion of YouTube’s portfolio. The interview addressed questions about the platform’s algorithm, its role in an increased presence of radicalization, and what the company would do moving forward.

Following that interview, YouTube pursued a number of different actions to address misinformation on their platform. In July, they invested $25 million in efforts to integrate trusted sources of news into their site. In August, they joined other social media sites in removing Alex Jones’ conspiratorial materials, and, finally, in January of this year, they made changes to their algorithm to stem the spread of ‘borderline content’. This was strategically timed with Google’s publication of their own plan to fight disinformation, a detailed document that contained sections pertaining specifically to YouTube. It seemed that YouTube may have finally succumbed to the pressure to address this long-simmering problem, and some analysts have shared cautiously optimistic thoughts regarding their new policies.

Regrettably, YouTube’s history as a dissemination mechanism for misinformation extends into the realm of public health. YouTube hosts a panoply of questionable health-related content, ranging from miracle cures to plastic surgery, and was a key player in the spread of conspiracy theories regarding vaccine safety. In early 2019, evidence surfaced that YouTube’s algorithm was suggesting anti-vax materials during videos sharing valuable vaccine information, prompting the platform to demonetize videos sharing this harmful information. This was a step in the right direction, but it was another example of how heightened public awareness was required to stir action on health misinformation. This is disconcerting, as the spread of health-related misinformation on low-profile issues may continue to go unaddressed.

YouTube has a long road ahead, though the last few months have featured several steps in the right direction. Like the other social media titans of their day, they need a heightened sense of self-awareness and a better understanding of the platform’s role in public discourse to ensure that they are providing safe information to the public.

Twitter

In the fall of 2018, a group of researchers outlined a disinformation campaign targeting individuals engaged in debates around vaccine safety on Twitter. As with Facebook and YouTube, Twitter faced issues with the design of their information-sharing algorithms. If anything can be deemed a common denominator in the role that social media has played in the widespread dissemination of misinformed content, it is sharing algorithms being leveraged in unintended ways.

Twitter was built as a forum for public interaction on a myriad of topics. Public posting, short messages, and hashtags pull users together into a streamlined conversation. Twitter’s algorithm is designed to encourage this phenomenon and is successful at igniting conversations on singular events or issues. While there are a number of positive applications for this format, it has frequently been coopted for nefarious purposes. The same mechanisms built to bring people together are now being manipulated to drive groups further apart. Bots are an efficient tool for inundating a topic with tailored messages, and trolls have encountered little resistance in their efforts to ‘spam’ and harass those with opposing views. It is not surprising that these two tools were the primary perpetrators in the previously referenced disinformation campaign, and that they have been the main targets of Twitter’s actions to curb misinformation.
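Twitter has not disclosed the exact signals behind its account reviews, but the kind of behavioral heuristics bot researchers describe can be sketched simply. The account fields and thresholds below are hypothetical; production systems weigh far more signals (follower graphs, client metadata, content features) and tune them against labeled data.

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    tweets_per_day: float   # average posting rate
    account_age_days: int
    duplicate_ratio: float  # share of tweets that are near-duplicates, 0..1

# Hypothetical thresholds chosen for illustration only.
MAX_HUMAN_TWEET_RATE = 72   # sustained ~1 tweet per 20 minutes, around the clock
MIN_TRUSTED_AGE_DAYS = 30
MAX_DUPLICATE_RATIO = 0.5

def looks_automated(acct: Account) -> bool:
    """Flag accounts whose posting behavior suggests automation."""
    signals = [
        acct.tweets_per_day > MAX_HUMAN_TWEET_RATE,
        acct.account_age_days < MIN_TRUSTED_AGE_DAYS,
        acct.duplicate_ratio > MAX_DUPLICATE_RATIO,
    ]
    return sum(signals) >= 2  # require at least two independent signals

suspicious = Account("health_truth_4821", tweets_per_day=400,
                     account_age_days=12, duplicate_ratio=0.9)
print(looks_automated(suspicious))  # True
```

Requiring multiple independent signals is the key design choice in heuristics like this one: any single threshold would sweep up enthusiastic human users, which is part of why purges of suspected bots are contentious.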

In the early summer of 2017, Twitter’s VP of Public Policy, Colin Crowell, posted an article on the company’s blog titled “Our Approach to Bots and Misinformation.” This article was notable for its admission that the site bore some responsibility for the presence and dissemination of misinformation online, and it provided some detail on possible actions that the company would take to mitigate these risks in the future.

Twitter’s leadership announced their ‘new approach’ the following March, a multi-faceted plan incorporating diverse review mechanisms designed to identify accounts actively contributing to an unhealthy information atmosphere. Following this announcement, the platform laid out a number of different policies, including a new requirement that accounts be linked to a phone number or email, updates to their algorithm, and even a shift in the default appearance of unidentified accounts away from the classic egg avatar. These changes were summarized in an official Twitter article that came out almost one year after their initial commitment to addressing some of the platform’s looming issues.

Shortly after this article was posted, Twitter began arguably its most aggressive action to address issues on the platform. In June of 2018, Twitter officials began a systematic purge of locked accounts that had been marked as suspicious during the earlier review process. Twitter also made sets of information attached to these accounts, including several thousand usernames, available to the public. Divulging information on these accounts was an effort to spur research on bot activity, another source of possible solutions.
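The released account data also lends itself to simple exploratory analysis. As a sketch, assuming the data arrives as a CSV with a space-separated `hashtags` column (the schema of Twitter's real releases may differ), a researcher might start by tallying which topics the suspicious accounts pushed:

```python
import csv
from collections import Counter

def top_hashtags(path: str, n: int = 10) -> list[tuple[str, int]]:
    """Count hashtag frequency across a CSV of suspicious-account tweets.

    Assumes a 'hashtags' column holding space-separated tags; the schema
    of Twitter's actual released datasets may differ.
    """
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts.update(tag.lower() for tag in row.get("hashtags", "").split())
    return counts.most_common(n)

# Hypothetical usage: top_hashtags("suspicious_account_tweets.csv")
# could surface coordinated pushes around specific topics.
```

Even analysis this simple can reveal whether flagged accounts clustered around particular controversies, such as vaccine safety, which is exactly the kind of research the data release was meant to spur.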

Despite these efforts, more work remains. In February of this year, roughly half a year after the start of Twitter’s purge, the company’s CEO shared that Twitter had still failed to do enough to counter these outstanding issues. Addressing misinformation will continue to be an integral part of their plans moving forward, and the introduction of new tools, like a reporting feature designed to mitigate the risk of spreading misinformation during political campaigns, could be useful in a health context as well.

Conclusions

In my view, there are lessons that can be gleaned from these three social media giants’ experiences with misinformation. Firstly, there is no clear, easy way to govern the spread of information on social media. These companies have long wrestled with the question of where to ‘draw the line’ on content, and what their role should be as networks for information sharing. Freedom of speech is a fundamental right, and any approach to governing informational materials has to be nuanced to ensure that a balance is reached. Despite this, it is clear that a stronger form of governance is needed to protect the health and safety of this massive, international user base. It is impossible for these platforms to remain impartial conduits for information going forward, and there are clear places for new interventions. Whether through partnerships with governments or through the establishment of a dedicated third party, a stronger effort needs to be made to address misinformation.

Secondly, the wide horizontal integration of the top social media sites presents both a looming challenge and an exciting opportunity for new intervention. Changes, or the lack thereof, that Facebook decides to pursue in governing information will have effects that are far more pervasive given the span of its user base across all of its applications. Making headway here depends on strong partnerships with the leaders of these major organizations.

Lastly, and most importantly, there needs to be a dedicated effort from social media platforms to address health misinformation. Each of the platforms in this review seemed to produce mechanisms for addressing misinformation reactively, as opposed to anticipating problems ahead of time. The emphasis on countering election-related misinformation stemmed from the deficiencies identified in the 2016 election, and the new emphasis on vaccine misinformation comes from the rising incidence of vaccine-preventable diseases globally. Although this effort in the health realm is encouraging for a health security professional, the bulk of these efforts are coming late into a health emergency, are not fully developed, and have consumed the lion’s share of attention given to health-related misinformation. Health misinformation is diverse, and the lack of specific focus on issues outside of vaccine hesitancy, like misinformation during emerging infectious disease events, is especially disconcerting.

Responding to health emergencies requires timely and accurate information, and delayed, incomplete responses to misinformation harm time-sensitive response efforts. There are opportunities for social media platforms to recognize this vulnerability and to act in ways that would ensure the health and safety of their user base. For example, a ‘war room’ for emerging infectious diseases, modeled on those developed to monitor international elections, could be a valuable tool for future response efforts. Increased exploration of the risks associated with health misinformation should be a focus for social media platforms and public health researchers moving forward. Finally, it is essential that the healthcare and public health community continues to advocate for action to address this national and international vulnerability.

In all, addressing health-related misinformation remains a challenge and probably always will. Health communicators and public health professionals have a primary role to play in this ongoing struggle, but they will face a herculean task if significant changes aren’t made to the fundamental architecture of dominant social media platforms. The continued unveiling of new efforts to reduce vaccine misinformation is encouraging; however, there is a need to address health misinformation proactively, quickly, and in health realms beyond anti-vaccination. Emerging public health threats are a certainty, and perhaps so is misinformation regarding those threats. However, that also means that both are foreseeable, which may be the greatest advantage of all.