Understanding the Mechanics of the Pandemic Emergency Financing Facility

People buy insurance because paying a regular premium to have money available in a disaster makes budgeting easier. Large organizations are no different. They often buy insurance for lots of different catastrophes.

A few years ago, the World Bank decided to buy pandemic insurance. Nobody is currently selling pandemic insurance, so the World Bank needed to create a process to make it happen. They did this by selling catastrophe bonds, or “cat bonds.” If an investor buys cat bonds, then that investor is essentially acting as an insurance company. The holders of the cat bonds get a regular payment—kind of like an insurance premium—which gives them a return on their investment that is higher than that of a normal bond. But if the catastrophe happens, they lose some or all of their investment, as if they were an insurance company paying out a claim.

This system was called the Insurance Window of the World Bank’s Pandemic Emergency Financing Facility. The “insurance premiums” are paid by Germany and Japan. If the “insurance claim” is collected, the World Bank gets the money, and they decide how to use it to help respond to the pandemic. The current cat bonds last until July 2020, so the pandemic insurance will end then unless more bonds are issued.

The bond, like any insurance contract, specifies very precisely in advance the conditions under which the investors (insurers) would lose money. In order to eliminate legal uncertainty and allow the risk to be accurately priced, the triggers for the claim—that is, the definition of the pandemic—need to be very specific and numeric. The definition cannot be anything like “whenever the WHO declares an event a public health emergency of international concern (PHEIC),” because that condition is based at least in part on human judgment. If investors knew that they would lose their money whenever an organization makes a judgment call, they would not want to buy the bond.

Given the recent developments in the Congo Ebola epidemic, and the financial needs in responding to that crisis, many people are interested in understanding what exactly makes the bond trigger. What follows is my synthesis of the prospectus of the cat bond that the World Bank sold to investors. (Please note that I am not an expert in financial contracts or reading the prospectuses of cat bonds. This is not financial advice, or any claim that the bonds will or will not trigger soon. I may have missed something important about the trigger from an investor’s perspective; this is meant as my best interpretation of that prospectus regarding when the money will be available for pandemic response.)

The World Bank sold 2 kinds of cat bonds, 1 for influenza and 1 for other pandemics. The one relevant to the Ebola epidemic is the second one, the Class B note. For these notes, a pandemic is defined as an event that:

  1.      is caused by coronavirus, filovirus (Ebola), Lassa fever, Rift Valley fever, or Crimean Congo hemorrhagic fever;

  2.      kills at least 250 people;

  3.      lasts at least 12 weeks;

  4.      has at least 250 new cases in the past 12 weeks;

  5.      has an increasing average number of new cases over the past 12 weeks; and

  6.      kills at least 20 people in a second country.

If all of those conditions are met, bond holders lose some or all of their money, depending on which disease happened and how bad it was. A coronavirus or Ebola pandemic that kills 2,500 or more people means they lose everything, and the World Bank gets $95 million. If no pandemic happens, they get 11.5% a year above the risk-free rate, so the donor countries are paying about $11 million a year for this insurance.

If the current Congo Ebola outbreak had been spread over more than 1 country, the bond would have already paid out, with investors losing 60% of their money and the World Bank getting $57 million to respond to the pandemic, because the epidemic would have killed more than 750 people but fewer than 2,500 (so far). But because it was mostly confined to a single country, it did not meet all the conditions.

It is possible that the cat bonds will not pay out even if the epidemic spreads to another country and kills more than 20 people there. If the average number of new cases is stable or decreasing, then the bond will not pay out. All of the triggers must be met at once on the same date.
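To make the all-at-once trigger logic concrete, the six Class B conditions and the payout tiers described above can be sketched in a few lines of code. This is a simplified illustration of this summary only, not the contract's actual terms; the function names and the representation of each condition are my own, and any payout tier below 750 deaths is omitted because it is not described here.

```python
# Covered pathogens for the Class B notes (per the summary above).
COVERED_VIRUSES = {"coronavirus", "filovirus", "lassa", "rift_valley", "crimean_congo"}

def class_b_triggered(virus, total_deaths, weeks_elapsed,
                      cases_past_12_weeks, cases_increasing,
                      deaths_in_second_country):
    """All six parametric conditions must hold at once, on the same date."""
    return (virus in COVERED_VIRUSES             # 1. covered pathogen
            and total_deaths >= 250              # 2. at least 250 deaths
            and weeks_elapsed >= 12              # 3. lasts at least 12 weeks
            and cases_past_12_weeks >= 250       # 4. 250+ new cases in past 12 weeks
            and cases_increasing                 # 5. rising average of new cases
            and deaths_in_second_country >= 20)  # 6. 20+ deaths in a second country

def class_b_payout(total_deaths):
    """Payout tiers for a coronavirus or Ebola event, per this summary only."""
    if total_deaths >= 2500:
        return 95_000_000   # investors lose everything
    if total_deaths > 750:
        return 57_000_000   # investors lose 60%
    return 0                # tiers below 750 deaths are not described in this summary

# The roughly $11 million annual cost to donors: 11.5% of the $95 million principal.
annual_premium = 0.115 * 95_000_000
```

On these terms, an Ebola epidemic with a rising case count and well over 750 deaths still produces no payout if fewer than 20 of those deaths occur in a second country, because the conjunction of all six conditions fails.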

As of early October 2019, when this was published, the weekly number of new cases had been mostly decreasing for almost two months. This means that, even if the WHO confirmed 20 deaths in a neighboring country such as Tanzania, the money would not be available. Even if the disease develops into an endemic regional disaster that kills a regular and steady number of people for years, while the public health response must spend hundreds of millions of dollars on containment to prevent a larger disaster, the insurance money still may not become available, based on the stated terms.

Of the 4 other events that have been declared a PHEIC, the bonds would have paid out for 2 of them. They would not have paid out for the 2014 polio declaration or the 2016 Zika declaration, because those viruses are not covered in the contract. The 2014 West Africa Ebola epidemic would have triggered a full payout of the Class B notes, and the 2009 swine flu would have triggered a payout of the Class A notes.

The Class A notes will pay out $225 million for a flu epidemic that either causes at least 5,000 confirmed cases in less than 6 weeks or causes at least 5,000 confirmed cases with the number of new cases growing at over 27% a week, but only if it is a novel influenza A virus or one that has not been a seasonal flu virus in the past 35 years. There are no requirements for deaths or geographic spread; it just pays out everything, assuming that the flu in question has been the subject of a WHO report describing it. Also, if there is a coronavirus that has killed more than 2,500 people (and wiped out the Class B notes), then one-sixth of the value of the Class A notes will pay out.

Given that this is a relatively new financial instrument and that many in the pandemic preparedness community are interested in learning more about it, it may be useful for the World Bank to publish a plain-language description of the parametric triggers of both the Class A and B cat bond notes in their insurance window, which provides more information than is available in this brief summary. It may also be helpful to publish such descriptions before future cat bonds are issued in order to make sure that conditions that would trigger the release of funding are broadly understood.

Innovation in DIY Biology

The following is an interesting perspective on the practice of DIY biology from guest contributor Noga Aharony.

Less than five years ago, Will Canine came to New York City to participate in the Occupy Wall Street movement. Now, he’s the founder of OpenTrons, a laboratory automation start-up in one of the most prestigious accelerators in the world. The missing link? A biohacking bootcamp in a DIY biology laboratory in Brooklyn named Genspace.

When Genspace was founded almost 10 years ago, its goal was to democratize biology. Now, with a fully equipped lab, they run workshops, community projects, and high school outreach programs with the goal of making biology literacy available to all.

Meanwhile, on the other side of the continent, you could find BioCurious. This lab was launched around the same time as Genspace, by members of a biotech start-up who couldn’t afford access to a traditional laboratory. They wanted to make it easier to innovate in biology. In the years since, they’ve spawned over 30 biotech start-ups.

According to the DIYbiosphere, the online hub for DIY biologists, there are now over 52 community laboratories worldwide, each with its own character. They’re in Canada, Slovenia, Peru, and Bangladesh. Their focus ranges from education, to art, to innovation.

Improving Science

OpenTrons is one of many start-ups that have risen from community laboratories. “I was inspired by the DIY biology movement’s goal to press the tools to do biotechnology into the hands of everyone, globally,” Will says. “Biotechnology has the potential to solve so many of the world’s problems – it’s a way to make food, clothing, and drugs, while maintaining a safe and less toxic environment.”

Will realized that the time it takes to master these skills is slowing down the development of life-saving innovations. He founded OpenTrons and began producing $4,000 liquid-handling machines to speed up and standardize experiments. Today, you can find their machines in the majority of top universities.

Success stories like Will’s are becoming more and more common. Innovations in DIY biology range from bioprinters that lay plant cells together, to algae that make sustainable cloth, to bricks and foam made out of mushrooms, to kits that detect the origin of the salmon on your table.

A Skilled Bunch

The idea that innovation could rise from biohackers, or amateur biologists, has been dismissed over and over in the discussion about DIY biology’s value. How could they ever exceed the rigor of corporate or academic labs? However, many of these innovations are spearheaded by trained biologists who found autonomy in the DIY scene that was unachievable within the constraints of academia.

Most will associate growing food in space with Matt Damon’s potato garden in Ridley Scott’s The Martian, but a postdoctoral researcher at the University of Edinburgh has a more refined idea. Together with two engineers, a physicist, and a microbiologist, Máté Ravasz is constructing a Mars bioreactor: a machine capable of growing green algae as a potential food source for astronauts.

Máté is a trained biologist with access to a lab and experience building bioreactors, but the community lab, Ascus, is still the only place where he could carry out his project. “In academia, getting a publication is key, and all the resources have to be spent on that. But here, there are no incentives. I can explore ideas I would not have been able to otherwise.”

The Freedom To Explore

“Back in university, I was using fungi to grow bricks stronger than asphalt,” says Elliot Roth. “I searched for a lab where I could work on this project, but every professor I talked to said they didn’t have the space, or that it was pointless since it wasn’t publishable, or that they didn’t have the resources.” Elliot has since found refuge in his local community lab in Richmond, Virginia. He founded Spira, where he sells kits to grow spirulina, an especially nutritious, remarkably fast-growing alga first studied at NASA. Elliot has been bringing his invention from the sky to the ground: last year, the World Food Program requested that he assess whether spirulina could enhance food security.

“When working on a DIY project you have more freedom to think, and more chances to make mistakes, than with start-ups,” says Simon Porphy, co-founder of Microsynbiotix.

A one-time passion project in the BioCurious community lab, Microsynbiotix is now engineering algae into oral vaccine delivery platforms. The goal, ultimately, is to create an alternative to feeding antibiotics to the fish that later end up on our plates, an important service amid growing concerns about antibiotic-resistant bacteria, which become more common with every dose.

Open-Access Culture

Rising from the CounterCulture Labs in Oakland, California, the OpenInsulin project is devoted to making a free, open protocol for insulin production. “I was frustrated with the status quo,” says Anthony Di Franco, the head of the project and a type I diabetic. “We should be the ones controlling our treatments. Not any pharmaceutical companies.” Their goal is to give diabetics the power to be self-sufficient or, at the least, to lead to the production of cheap, generic insulin by democratizing information and capabilities that only a few pharmaceutical companies have right now.

This approach to science and medicine is not unique to OpenInsulin. Open-source culture initially rose in the IT industry in the form of publicly available code that anyone can use or improve upon, and the idea has taken hold in DIY biology. “But for us it goes a lot deeper,” Anthony says. “We don’t only need access to the knowledge but to the tools to do that. We need the whole pipeline and the whole feedback loop to be open.”

In a way, open science is the only viable strategy for community labs, says Maria Chavez, executive director of BioCurious. Their lab runs community science projects that anyone can attend, with meticulous, online, publicly available lab notebooks that describe their weekly progress. “We do it for practical reasons. When you have a community project that turns into a company, it’s difficult: how do you compensate people who volunteered on the project?”

A side effect of removing barriers to biology is increased collaboration among fields. “We are kind of the matchmakers. We get scientists from different disciplines in touch with each other,” says Kenza Samlali, who runs a community lab named BricoBio in Montreal. “A few months ago, we were approached by an architect who was interested in biotechnology so that she could grow materials she can use. She ended up doing a university-affiliated fellowship with us. Now she wants to get involved.”

These collaborations facilitate innovation by bringing together diverse teams with greater capabilities than any one member of the team. The culture of open science forms the bedrock of innovation.

Though this field is beginning to flourish, community laboratories are still in flux. “When scientists come into BricoBio they ask, ‘What is the limit of this? How far can I go and do a real experiment?’” says Kenza.

“We’re in the early days of trying to figure out how it works,” says Zach Mueller, co-founder of SoundBio. All of the members of the lab are volunteers, and they all have day jobs. To keep the lights on, they’re experimenting with offering educational opportunities and teaming up with corporate sponsors.

The Concerns

A few years ago, a Kickstarter campaign to create a glowing plant received criticism for the potential ramifications of releasing such a plant into the wild. Another wave of criticism came when a start-up named the Odin began selling kits with CRISPR-Cas9, an enzyme that can theoretically be used to modify human cells’ genetic code.

Recently, the publication of an experiment detailing how to construct a previously extinct strain of horsepox, a relative of the deadly smallpox virus, led to a New York Times article expressing worries that DIY biologists will be able to use the publication as a guide to create the virus at home, making ‘DIY pandemics.’ In reality, only a seasoned virologist with a well-equipped lab and numerous connections could construct such a deadly virus, but nonetheless DIY biology’s drive to democratize science has instilled the fear that it will lead to the democratization of dangerous tools.

The nascence of DIY biology has led to fears that an inability to self-regulate will result in individuals pursuing unethical projects. In academic laboratories, one has to go through specific training programs before beginning to work, biosafety officers are available in case there are any issues or questions, and each project is evaluated by an Institutional Review Board that assesses its ethics.

These mechanisms are still in development in DIY biology. Dan Grushkin of Genspace and Todd Kuiken, a scholar at the University of North Carolina, are currently developing biosafety protocols and training two biosafety officers who will specialize in DIY biology, using funding from the Open Philanthropy Project.

In the meantime, community laboratories are finding different ways to work with authorities and build the infrastructure needed. Many laboratories, including Genspace in New York, BUGSS in Baltimore, and BioCurious in Silicon Valley, have been in touch with FBI coordinators for years. BioCurious has also set up its own version of an Institutional Review Board, through which every project has to be approved before it starts. “Sometimes I have to tell someone, ‘That’s a great project. You can’t do that here,’” says Maria Chavez.

Up in Seattle, SoundBio has taken a different approach. The lab’s close ties to the University of Washington allowed it to model its safety practices after the norms in academia. “We made sure to put together a biosafety manual for our lab before we even opened the space.” When in doubt about safety, they turn to the university.

In Canada, it’s the public health authority, Health Canada, that contacts DIY biology labs as they open and offers them resources. “Health Canada has put a lot of trust in us,” says Kenza Samlali. “We don’t want to disappoint them.”

In Mexico, community laboratories require a license. Ricardo Chavez has already filed the paperwork and is currently waiting for the Environmental Agency’s approval for the first DIY biology lab. “In the beginning, it was a common misconception that getting a permit was really hard, but it wasn’t until we got in touch with the authorities that we found out that it was easy.” Ricardo was so encouraged by the interaction with the government that he now sits on the National Commission on Biosafety of Genetically Modified Organisms.

These biospaces do not work in isolation. In October this year, the MIT Community Biotechnology Initiative will host its second annual conference, which will be attended by DIY biologists from all over the world. Heads of community labs in Canada, Mexico, and the United States have also formed ‘the network of the independent biospaces.’ The goal is to be a resource to one another and to build community standards.

“We all know that we need to have a higher standard of safety. We’re under a magnifying glass,” says Maria from BioCurious. “We want to make sure we’re part of the discussion in biosecurity. If we know what the concerns are, we can fix them.”

Policy considerations aside, DIY biologists are full of hope. “I think that community labs have a really vital place in the world and in history,” says Will Canine of OpenTrons. “We’re still at the early days of biotechnology, and we’re accelerating much faster than mainstream technology.”

Misinformation and Disinformation: An Increasingly Apparent Threat to Global Health Security – Part II

Since our last post, the issue of health-related mis- and disinformation has continued to gain currency, particularly in light of measles outbreaks both in the US and in many countries abroad. Health-related misinformation occurs organically through information sharing by ill-informed individuals; disinformation, on the other hand, is the direct result of an orchestrated effort by a nefarious actor. Although the final products of disinformation and misinformation activities are similar, focusing on the differences in their development can provide opportunities for intervention.

Over the past decade, orchestrated disinformation campaigns have bent social media platforms to their will, and mounting public pressure has forced companies to respond. Here, we’ll dive more deeply into how several key social media companies have begun to address the presence of misinformation on their platforms, offer a preliminary assessment of the sufficiency of these efforts, and provide some additional considerations for the future.

The Modern Social Media Landscape

Currently, six social media sites have active user bases above one billion people. Of these six, Facebook owns four (Facebook, WhatsApp, Facebook Messenger, and Instagram), Google owns one (YouTube), and the Chinese company Tencent owns the other (WeChat). The next few sections will look at how three social media platforms have handled health-related misinformation.

The three platforms featured in this blog, Facebook, YouTube, and Twitter, reach global audiences and represent well-documented case studies that share similar experiences as well as unique challenges in their efforts to address mis- and disinformation.

Facebook

From a content reliability standpoint, Facebook has been under mounting pressure to address disinformation since the use of its platform to spread disinformation during the 2016 U.S. presidential election. Senate hearings and excerpts from the Mueller report highlighted the threat that disinformation campaigns pose to a nation’s democracy and demonstrated how a foreign state may be able to extend its influence.

Among other things, this event and the ensuing public scrutiny brought issues around the identity and responsibility of social media platforms, and the potential need for a modernized regulatory approach to those platforms, to the forefront of our national dialogue.

In 2017, Facebook officials recognized their role in the increasing prevalence of disinformation, and listed several “key areas” where they are working to address the problem: 

-      Disrupting the economic incentives of misinformation (because most false news is financially motivated)

-      Building new products (to curb the spread of false news)

-      Helping people make more informed decisions (when they encounter false news)

Facebook took actions toward achieving the goals of these three key areas. The social media titan has publicly shared its efforts to remove harmful accounts and groups from the platform, has altered its internal algorithm and developed AI to try to deter the dissemination of false information, and has injected large amounts of money into journalistic efforts designed to build an informed public. Some metrics show that the site’s efforts to date to stall the spread of misinformation may have had a small yet positive impact, and public statements from CEO Mark Zuckerberg show that the company is now proactively calling for regulation that could curb harmful content on the internet.

While these efforts address several key issues in the battle against mis- and disinformation, it is fair to question how immediate and thorough their impact will be. Facebook has dedicated a large portion of its effort to addressing misinformation during democratic elections, a trend also seen with other social media platforms. And, while it is obvious that protection of global democracies should be a top priority, this focus raises concerns that issues like health-related misinformation may fall by the wayside.

Facing rising measles cases and press attention highlighting Facebook’s role in growing anti-vaccination sentiment, the platform released a plan to ‘combat vaccine misinformation’. This was one of the company’s first public plans to directly address a health misinformation issue, and although it is a positive step toward solving the problem, the scope of the effort does not match the scale of the vaccine misinformation challenge, or the entire swath of health-related misinformation that flourishes on the site. The steps in Facebook’s plan to address vaccine misinformation are rational and well-intentioned, but, as referenced above, the problem has persisted beyond these initial efforts. It is encouraging to see this leader in social media directly address the challenge, but the company must continue to refine its efforts if it wishes to make a meaningful impact on its site. Moving forward, Facebook should:

-      Continue their ongoing efforts to address misinformation. To its credit, Facebook does acknowledge that this problem will require a continuous and committed effort, and that the problem is not yet solved.

-      Develop groups, like their election war rooms, that have a primary focus on identifying and stopping the spread of obvious health-related misinformation. Due to the speed at which information moves through social media, a proactive approach is likely to be more effective than a reactive one.

-      Ensure that the future direction of the social media platform considers the implication of misinformation dissemination as a key feature in their decision making process. 

YouTube

Most social media platforms have acknowledged that their algorithms play a role in the spread of misinformation, but it’s important to recognize that there are degrees to how efficiently this occurs. YouTube in particular has long been known for its tendency to direct viewers down a rabbit hole of related content. This design is meant to engage viewers, keeping them on the platform consuming both new content and advertisements. Unfortunately, there is documentation that this algorithm may be particularly good at spreading conspiratorial content, a dangerous mechanism for the spread of misinformation. As far back as 2007, researchers noted that the site had become a ‘breeding ground’ for misinformation. Despite these early warning signs, however, YouTube took few definitive actions to counter this challenge.

In March of 2018, YouTube CEO Susan Wojcicki sat down for a long interview at the annual South by Southwest conference in Austin, Texas. She stressed the importance of free information and emphasized that false information makes up only a minimal portion of YouTube’s content. The interview addressed questions about the platform’s algorithm, its role in an increased presence of radicalization, and what the company would do moving forward.

Following that interview, YouTube pursued a number of actions to address misinformation on its platform. In July, they invested $25 million into efforts to integrate trusted sources of news onto their site. In August, they joined other social media sites in the removal of Alex Jones’ conspiratorial materials, and, finally, in January of this year, they made changes to their algorithm to stem the spread of ‘borderline content’. This was strategically timed with Google’s publication of its own plan to fight disinformation, a detailed document that contained sections pertaining specifically to YouTube. It seemed that YouTube may have finally succumbed to the pressure to address this long-simmering problem, and some analysts have shared cautiously optimistic thoughts regarding the new policies.

Regrettably, YouTube’s history as a dissemination mechanism for misinformation extends into the realm of public health. YouTube hosts a panoply of questionable health-related content, ranging from miracle cures to plastic surgery, and was a key player in the spread of conspiracy theories regarding vaccine safety. In early 2019, evidence surfaced that YouTube’s algorithm was suggesting anti-vax materials during videos sharing valuable vaccine information, prompting the platform to demonetize videos sharing this harmful information. This was a step in the right direction, but also another example of how action on health misinformation required heightened public awareness to materialize. This is disconcerting, considering that the spread of health-related misinformation on low-profile issues may continue to go unaddressed.

YouTube has a long road ahead, but the last few months have featured several steps in the right direction. Like the other social media titans of the day, the company needs a heightened sense of self-awareness and a better understanding of the platform’s role in public discourse to ensure that it is providing safe information to the public.

Twitter

In the fall of 2018, a group of researchers outlined a disinformation campaign targeting individuals engaged in debates around vaccine safety on Twitter. As with Facebook and YouTube, Twitter faced issues with the design of its information-sharing algorithms. If anything can be deemed a common denominator in the role that social media has played in the widespread dissemination of misinformed content, it is sharing algorithms being leveraged in unintended ways.

Twitter was built as a forum for public interaction on a myriad of topics. Public posting, short messages, and hashtags pull users together into a streamlined conversation. Twitter’s algorithm is designed to encourage this phenomenon and is successful at igniting conversations on singular events or issues. While there are a number of positive applications for this format, it has frequently been co-opted for nefarious purposes. The same mechanisms built to bring people together are now being manipulated to drive groups further apart. Bots are an efficient tool for inundating a topic with tailored messages, and trolls have encountered little resistance in their efforts to ‘spam’ and harass those with opposing views. It is not surprising that these two devices were the primary perpetrators in the previously referenced disinformation campaign, and that they have been the main targets of Twitter’s actions to curb misinformation.

In the early summer of 2017, Twitter’s VP of Public Policy, Colin Crowell, posted an article on the company blog titled “Our Approach to Bots and Misinformation.” The article was notable for its admission that the site bore some responsibility for the presence and dissemination of misinformation online, and it provided some detail on possible actions that the company would take to mitigate these risks in the future.

Twitter’s leadership announced its ‘new approach’ the following March: a multi-faceted plan incorporating diverse review mechanisms designed to identify accounts actively contributing to an unhealthy information atmosphere. Following this announcement, the platform laid out a number of policies, including a new requirement that accounts be linked to a phone number or email, updates to its algorithm, and even a shift in the appearance of anonymous accounts away from the classic egg avatar. These changes were summarized in an official Twitter article that came out almost one year after the company’s initial commitment to addressing some of the platform’s looming issues.

Shortly after this article was posted, Twitter began arguably its most aggressive action to address issues on the platform. In June of 2018, Twitter officials began a systematic purge of locked accounts that had been marked as suspicious during the earlier review process. Twitter also made sets of information attached to these accounts, including several thousand usernames, available to the public. Divulging information on these accounts was an effort to spur research on bot activity, another source of possible solutions.

Despite these efforts, more work remains. In February of this year, roughly half a year after the start of Twitter’s purge, the company’s CEO acknowledged that Twitter had still not done enough to counter these outstanding issues. Addressing misinformation will continue to be an integral part of the company’s plans moving forward, and the introduction of new tools, like a reporting feature designed to mitigate the risk of spreading misinformation during political campaigns, could be useful in a health context as well.

Lessons Learned

In my view, there are lessons that can be gleaned from these three social media giants’ experiences with misinformation. Firstly, there is no clear, easy way to govern the spread of information on social media. These companies have long wrestled with where to ‘draw the line’ on content and what their role should be as networks for information sharing. Freedom of speech is a fundamental right, and any approach to governing informational material has to be nuanced enough to strike a balance. Despite this, it is clear that a stronger form of governance is needed to protect the health and safety of this massive, international user base. It is impossible for these platforms to remain impartial conduits for information going forward, and there are clear places for new interventions. Whether through partnerships with governments or through the establishment of a dedicated third party, a stronger effort needs to be made to address misinformation.

Secondly, the wide horizontal integration of the top social media sites presents both a looming challenge and an exciting opportunity for new intervention. Changes, or the lack thereof, that Facebook pursues in governing information on its platforms will have effects that are far more pervasive given the span of its user base across all of its applications. Overcoming this obstacle depends on strong partnership with the leaders of these major organizations.

Lastly, and most importantly, there needs to be a dedicated effort from social media platforms to address health misinformation. Each of the platforms in this review produced mechanisms for addressing misinformation reactively rather than anticipating problems ahead of time. The emphasis on countering election-related misinformation stemmed from the deficiencies identified in the 2016 election, and the new emphasis on vaccine misinformation comes from the rising incidence of vaccine-preventable diseases globally. Although this effort in the health realm is encouraging for a health security professional, the bulk of these efforts have come late in a health emergency, are not fully developed, and have consumed the lion’s share of the attention given to health-related misinformation. Health misinformation is diverse, and the lack of specific focus on issues outside of vaccine hesitancy, like misinformation during emerging infectious disease outbreaks, is especially disconcerting. Responding to health emergencies requires timely and accurate information, and delayed, incomplete responses to misinformation harm time-sensitive response efforts. There are opportunities for social media platforms to recognize this vulnerability and to act in ways that would ensure the health and safety of their user base. For example, a ‘war room’ for emerging infectious diseases, modeled on those developed to monitor international elections, could be a valuable tool for future response efforts. Increased exploration of the risks associated with health misinformation should be a focus for social media platforms and public health researchers moving forward. Finally, it is essential that the healthcare and public health community continues to advocate for action to address this national and international vulnerability.

In all, addressing health-related misinformation remains a challenge and probably always will. Health communicators and public health professionals have a primary role to play in this ongoing struggle, but they will face a herculean task if significant changes aren’t made to the fundamental architecture of dominant social media platforms. The continued unveiling of new efforts to reduce vaccine misinformation is encouraging; however, there is a need to address health misinformation proactively, quickly, and in health realms beyond anti-vaccination. Emerging public health threats are a certainty, and perhaps so is misinformation regarding those threats. However, that also means that both are foreseeable, which may be the greatest advantage of all.

Misinformation and Disinformation: An Increasingly Apparent Threat to Global Health Security

By Marc Trotochaud and Matthew Watson

In the broadest possible conception, communication is a system that allows humans to share information with one another. Words and images can shape how we perceive information, a factor that plays a large role in our decision-making processes. Messages can incite emotion, provoke dialogue, and, albeit rarely, shift people’s self-perception or their understanding of their relationships with those around them. At our core, humans are social beings, and communication is the natural product of that reality.

In public health, researchers have spent decades studying the best way to use communication to prompt protective health behaviors. Vast numbers of academic studies continuously add to the pool of professional knowledge, and it is the prerogative of health communicators to efficiently relay new information to populations of interest. In health security, communication is a major factor in how we plan for and respond to threats that can impact large populations.

Over the past fifteen years, some aspects of the practice of communication have changed dramatically. The rapid accessibility of mobile devices and the rise of social media platforms (Facebook, Twitter, Instagram, SnapChat, and others) have created a very noisy information landscape, which presents new challenges for public health practitioners and health security professionals.

Of these challenges, misinformation and disinformation propagation has commanded the public spotlight over the past few years and has significantly damaged the global informational landscape. This two-part post will detail the impact of health-related misinformation and disinformation, its effects on health security, and the potential for addressing this issue in the near future.  

How Did We Get to This Point? 

Misinformation and disinformation are not new, but it is clear that changes in how people consume information have catalyzed their production. Traditionally, people received their information through interpersonal interactions or via traditional media channels like TV, radio, or print. The conglomerates that produced news content had guidelines in place that let them play a gatekeeper role for public information. These guidelines were far from a perfect system, but there was a universal understanding of how the system worked, which established a familiarity with how information traveled.

This understanding changed dramatically when social media platforms made their way into the informational landscape. These platforms were created to bring people closer together. They offered unmoderated content creation and the ability to easily access and share information. Regardless of the intricacies of each specific platform, they each shared a common goal of inducing interaction between users. In effect, they were building a network of individual two-way communication channels on a massively expanded scale. 

As the user bases for these sites grew, this two-way vision transformed into what’s come to be known as a “many-to-many” communication system. While many argue the technicalities of the title, there is agreement on its core principle: many individuals now have the ability to post information to many people, at any time and with limited regulation. Multiple voices now speak to any number of topics, and with the click of a button, any individual can share their personal thoughts with the world. This system has changed who shares and who receives information, flipping the script on information seeking. With social media’s rapid introduction into the technological sphere and the fast adoption of the many-to-many model, the traditional gatekeepers of information quickly became outdated, and the lack of a coordinated effort to adjust opened a window for misinformation production.

During this same time period, advances in mobile technology created the perfect mechanism for personalizing these new platforms, increasing the amount of time that users interacted with them. The opportunity to access social media at a moment’s notice has expedited its growth, and led to the hyper-connected society we live in today. 

Social media is not the sole cause of this ‘post-truth era’, but its outsized role is undeniable. This new information landscape has changed the traditional model of information sharing, and its full impacts are still not completely known.

Misinformation and Disinformation: The Bad and The Worse

To best understand the impact of misinformation and disinformation propagation, it is important to acknowledge the fundamental difference between the two: intent. Simply put, misinformation is wrong or misconstrued information. It can stem from any number of sources and has been a common plight for centuries. It is not purposefully shared with the knowledge that it is incorrect, and generally, its motivation is not malicious. Disinformation, on the other hand, is incorrect information shared deliberately, with full knowledge that it is false. This distinction in intent is often hard to ascertain in real time but is critical in how one approaches correction.

Modern disinformation campaigns are a particularly virulent strain of propaganda. When paired with powerful social media platforms, disinformation activities have the ability to spread quickly with increased reach. These activities have become the subject of recent controversy, and have spawned multiple federal investigations. The case that has elicited the greatest response was an alleged campaign fueled by Russian trolls during the 2016 presidential election. The court proceedings tied to this case brought about massive social media purges, and unearthed the presence of foreign companies running disinformation campaigns around the world. While these efforts are certainly a pressing political and national security issue, a recent analysis clearly demonstrated that there is a connection to health communication as well. 

This past August, the American Journal of Public Health published an article that outlined a disinformation campaign that used programmed “bots” and online trolls to purposefully muddy the waters and rile up controversy between those who advocate for routine vaccination and those who oppose it. The strategic aim of this particular disinformation campaign was to use public health as a wedge issue, and to fan the flames of societal discord. The perpetrators of what the authors deemed “weaponized health communication” carried out their mission internationally through multiple social media channels. While the investigators did not attribute the campaign to a person or state, a sizeable portion of these trolls and bots were Russian accounts. 

What this article demonstrates is concerning. Public health has been, and continues to be, a field that transcends political differences for the betterment of health and wellbeing. Targeting health issues with the intent to divide contradicts the unifying nature fundamental to the discipline. There is danger when health topics are used to drive people apart, and now there is a clear example of that being the case. Identifying these disinformation campaigns is just the starting point for an uncertain future, and it’s clear that immediate action is needed to address this growing concern. The next step will be moving from retrospective identification to a more engaged, proactive messaging posture.

The Impact on Health Security and Health Communication

Misinformation and disinformation have both direct and indirect effects on the field of health security. Of these, the impact of incorrect information on decision making seems most apparent. In the event of a disaster or emergency, timely communication of accurate information can be a major component in saving lives. Misinformation complicates this directly by putting false information into circulation. The 2014 Ebola epidemic, for example, was plagued by viral rumors that distorted how people perceived their risk of disease. In future disasters, people seeking information will have to engage more actively with a growing amount of available material or accept the possibility that the information is wrong.

The growing amount of available material highlights one of the indirect impacts of false information propagation: there is an ever-growing amount of false information online, and there is evidence that people may give more attention to it than to the truth. Both true and false stories are vying for attention, and some are finding it more difficult for their messages to stand out. The product of these developments is a massive amount of available information, all fighting to be seen. Target audiences now face an ‘information overload’, prompting them to take mental shortcuts in how they select information. How people take these shortcuts has been the subject of decades of psychosocial research, serving as the backbone for theories that try to determine their influence on individual decision making. Many speculate that this new information acquisition process has been the driving factor in producing pockets of individuals in which incorrect ideas may be widely accepted as truth. Now, the challenge for health communication practitioners is not just sharing information but doing so while simultaneously persuading diverse audiences that the science reflects the truth.

In addition to these challenges, an underlying growth in public distrust is a disconcerting development for health communicators and health security professionals. In recent years, this phenomenon has become increasingly well documented and has frustrated professionals from a wide range of disciplines. Portions of the public will deny overwhelming empirical evidence, whether regarding climate change or vaccine efficacy, in favor of information that supports their existing beliefs – so-called ‘confirmation bias’. Studies have shown that this gap persists across various audiences, presenting a disconcerting outlook for communication and health security. There have always been people who express skepticism of the scientific method, but the widespread, vocal nature of modern dissenters presents a particularly thorny challenge that will be harder to address.

It seems all but certain that misinformation and disinformation propagation will be a challenge for future health communication efforts. Health communication is, and will continue to be, a critical component of health security, adding pressure to find a solution to this problem. There is no clear best path forward, and the next steps we take will determine the impact of messaging efforts in this permanently altered communication realm.

We will explore the wide-range of potential options in the second half of this blog, Misinformation and Disinformation Propagation: What Now?

The role of NGOs in global health security: A conversation with Tausi Suedi

By Nick Alexopulos

Tausi Suedi, CEO and executive director of Childbirth Survival International (photo by Larry Canner)


On July 30, the Johns Hopkins Center for Health Security convened more than 60 experts to gather input and recommendations for the forthcoming U.S. Global Health Security Strategy, a document that will codify U.S. support for the Global Health Security Agenda. Among the many discussion topics—disease surveillance, laboratory diagnostics, workforce development, emergency management, antimicrobial resistance, and more—was the role nongovernmental organizations (NGOs) play in overall global health security, and how to ensure those organizations are meaningfully included in an interagency U.S. strategy.

Tausi Suedi, MPH, championed this cause in her questions and comments throughout the meeting. She is the CEO and executive director of Childbirth Survival International, a grassroots nonprofit advocating for maternal and newborn health in the Sub-Saharan African countries of Tanzania, Uganda, Nigeria, Ghana, and Somalia. Suedi is also an adjunct professor of global health at Towson University. 

After the Center’s event, the Bifurcated Needle spoke with Suedi about NGOs’ contributions to the GHSA: 

What major point did you communicate to the group, and what key message were you hoping to hear from your fellow experts and panelists?

The Global Health Security Agenda requires partnership and collaboration, especially with grassroots nonprofits that are actually implementing some of these packages. When you look at our nonprofit, Childbirth Survival International, we particularly focus on some of those action packages. For example, workforce development, immunizations, and making sure healthcare workers on the front lines are being trained to quickly recognize irregularities and act quickly if they identify a threat. 

What I was hoping to hear and what I think I did hear was the U.S. government’s commitment to continue engaging with [low resource] countries in order to strengthen their healthcare systems. As we all know, many systems are still inadequate, especially as you move from the urban to the rural areas—and so a lot more effort is needed. And that needs to be a concerted effort. Of course the U.S. government is a major player in this and very well recognized in its role, and what I heard from experts in the room is that the United States is on track to continue making those changes in the world.

How does the GHSA benefit from the work of NGOs?

We bring extraordinary value because we’re at the grassroots. If you look at the GHSA, how it’s structured, a vital component of its mission is to actually respond to a threat. It’s there, waiting; if something happens, let’s go. But the NGOs, we’re already on the ground working every single day, building the health systems, training the healthcare workers, educating the communities, getting families to immunize their kids, and working on other factors to prevent disease. We’re doing this work constantly, and like GHSA we’re responding to an emergency at the particular moment when it happens.

Your organization works in five Sub-Saharan African countries. What does the GHSA mean for them?

They will benefit tremendously from GHSA efforts to strengthen their healthcare systems, which still rely a lot on donor funding and international NGOs. With this collaboration of the U.S. government and international NGOs working together on this GHSA package, you’re bound to find countries improving with strengthened healthcare systems. 

Now, some countries are part of GHSA and others are not. Somalia, for instance, is one of those fragile countries, and one of the countries my organization serves. For it to be part of this GHSA consortium, a lot more work is needed to build its healthcare system and health infrastructure.

Final thought?

As an American, as an African, as a woman leader, I think we’re doing great work to improve health around the world. But I think we should not lose sight of what makes this happen: focusing at the grassroots level, the community level, where there is the most hurt.


A summary meeting report is forthcoming and will be available on the Center's website.

Help us map the synthetic genomics industry

Synthetic genomics may be the most common dual-use biotechnology today. The ability to construct double-stranded DNA from scratch enables a better understanding of protein structure and function and the development of new vaccines, speeding up the process of biological engineering. However, these technologies also have the potential to give people with nefarious intentions access to toxins and pathogens that would otherwise be difficult to acquire.

With each passing year, synthetic biologists are becoming more adept at designing novel structures and functions from DNA, RNA, and proteins—the basic building blocks of biology. The Central Dogma of molecular biology is that DNA is transcribed into RNA, which is then translated into proteins (4). Proteins then perform a variety of functions inside and outside the cell. They can join together to build the cell’s cytoskeleton, break down molecules to produce the energy the cell needs, and much more. However, proteins can also be used to synthesize toxins such as cyanide salts and aflatoxins. Proteins can also themselves be toxins, such as ricin or botulinum toxin.
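The DNA-to-RNA-to-protein flow described above can be sketched in a few lines of code. This is a toy illustration: the codon table fragment covers only the handful of codons used here, and the sequence is invented, not a real gene.

```python
# Illustrative sketch of the Central Dogma: DNA -> RNA -> protein.

# Minimal codon table fragment (assumption: only the codons used below).
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "AAA": "Lys", "UAA": "STOP"}

def transcribe(dna: str) -> str:
    """Transcription: copy the coding strand, replacing T with U."""
    return dna.replace("T", "U")

def translate(rna: str) -> list[str]:
    """Translation: read codons (triplets) until a stop codon."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        amino_acid = CODON_TABLE[rna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

dna = "ATGTTTAAATAA"   # invented coding-strand DNA, not a real gene
rna = transcribe(dna)  # "AUGUUUAAAUAA"
print(translate(rna))  # ['Met', 'Phe', 'Lys']
```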

These approaches even confer the ability to create viruses from scratch. In the past, DNA synthesis was a key step of the de novo (i.e., from scratch) synthesis of poliovirus, the 1918 influenza virus, and most recently horsepox virus. While the synthesis of an infectious virus requires a high degree of technical expertise, access to DNA was a bottleneck. For a ‘booted’ virus to be infectious, its synthesized DNA must have as few errors as possible. While benchtop synthesizers make it easier to synthesize double-stranded DNA without having to order a sequence from a gene synthesis company, this method typically leads to too many errors to make a long genetic sequence with high enough accuracy.
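The error problem can be made concrete with a back-of-the-envelope calculation: assuming independent per-base errors, the chance that a sequence of length L comes out error-free is (1 − e)^L. The 0.5% per-base error rate below is an illustrative assumption, not a measured figure for any real synthesizer.

```python
# Toy calculation of the probability that a synthesized sequence is
# error-free, assuming independent per-base errors. Rates and lengths
# here are illustrative assumptions.

def p_error_free(per_base_error_rate: float, length: int) -> float:
    """P(no errors) = (1 - e)^L for L bases with per-base error rate e."""
    return (1.0 - per_base_error_rate) ** length

# A short oligo tolerates a modest error rate; a long viral genome does not.
print(p_error_free(0.005, 60))      # roughly 0.74 for a 60-base oligo
print(p_error_free(0.005, 30_000))  # effectively 0 for a 30 kb genome
```

This steep drop-off with length is why low-accuracy benchtop synthesis struggles to produce long genetic sequences, even when short fragments come out fine.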

This potential threat can be reduced if gene synthesis providers screen their orders. To that end, in 2010, the US Department of Health and Human Services published the Screening Framework Guidance for Providers of Synthetic Double-Stranded DNA. This guidance recommends that companies screen both the customer and the sequence of any gene synthesis order to ensure its legitimacy (13). However, since the publication of the HHS guidance, the gene synthesis industry has quadrupled in size, and the number of providers has doubled. This once US-based industry is now growing, and is projected to keep growing, particularly in the Asia-Pacific region, where it was almost absent when the HHS guidance was written (14).
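As a rough illustration of what sequence screening involves, the sketch below flags an order if it shares a long exact subsequence with any entry on a list of sequences of concern. Real providers use far more sophisticated similarity searches (e.g., BLAST-style alignment); the 20-base window, the `flags_order` helper, and the toy sequences are illustrative assumptions, not part of the HHS guidance.

```python
# Minimal sketch of order screening via shared k-mers. Purely
# illustrative; not a real screening protocol.

WINDOW = 20  # length of the exact-match window to flag (assumption)

def kmers(seq: str, k: int) -> set[str]:
    """All length-k substrings of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def flags_order(order_seq: str, sequences_of_concern: list[str]) -> bool:
    """True if the order shares any WINDOW-length subsequence
    with a sequence of concern."""
    order_kmers = kmers(order_seq, WINDOW)
    return any(order_kmers & kmers(soc, WINDOW) for soc in sequences_of_concern)

# Toy example: the 'sequence of concern' is an invented repeat,
# not a real pathogen sequence.
concern_list = ["ACGT" * 20]
print(flags_order("TTTT" + "ACGT" * 5 + "TTTT", concern_list))  # True
print(flags_order("T" * 40, concern_list))                      # False
```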

Here at the Johns Hopkins Center for Health Security, we are currently mapping the gene synthesis industry in order to understand which changes are necessary for the future of the HHS guidance. We are searching for gene synthesis companies and cataloguing them based on their laboratory locations, the reach of their shipments, and the breadth of their screening practices. We have published a work-in-progress map here in the hope of receiving feedback from the public and from gene synthesis companies, and of ensuring that the information we have collected is correct.

If you would like to add a gene synthesis company to this map, or if you can verify information about a gene synthesis company’s laboratory locations, shipping, or screening protocols, please email Noga Aharony at naharon1@jhu.edu. We are assembling this information for publication, and will be making further recommendations regarding gene synthesis order screening.

Moving the needle on infectious disease control investment

In mid-June, the National Academies of Science, Engineering, and Medicine held a workshop called “Understanding the Economics of Microbial Threats”, bringing together economic and public health subject matter experts to discuss the economics of infectious disease emergencies. Discussion topics were diverse, ranging from preparing for the next pandemic to tackling antimicrobial resistance.

An inability to control a deadly outbreak substantially affects regional and global stability. The 2014 Ebola outbreak cost resource-constrained Sierra Leone, Guinea, and Liberia almost $3 billion and contributed to longer-term reductions in GDP. Outbreaks burden communities both through the direct costs of preventing and treating illness and through longer-term reductions in labor productivity and health.

Dr. Tom Inglesby, our Center’s director, was a workshop panelist and described challenges and important considerations for optimizing responses to global catastrophic biological risks (GCBRs). An ideal response to these large-scale pandemics is multifaceted, requiring substantial planning, stockpile maintenance, non-pharmaceutical interventions (e.g., closing schools), and the flexibility to account for pathogen-specific attack and mortality rates. Strengthening resources to address GCBRs is critical. Though scientists and governments have historically focused on other catastrophic risks like nuclear threats, the consequences of inadequately preparing for the next pandemic could be immense, as we demonstrated in the Center’s recent Clade X exercise.

Despite consensus in the health economics community that infection control is important, these messages often do not resonate with other key stakeholders. As Workshop Chair Dr. Peter Sands articulated, economic policymakers rarely fully consider the financial burden of microbial threats. A contributing factor, Dr. Martin Meltzer explained, is that future pandemics are inevitable but unpredictable, occurring anywhere from once every 10 years to once every 60 years. This time range makes the necessary investment in infection control unpalatable to policymakers who prefer shorter-term solutions with clear outputs that can be achieved within terms or election cycles. Furthermore, the way in which modeling results are communicated can sometimes backfire. Some speakers noted that “trillion-itis”, the tendency for modeling results to express potential findings in terms of billions and trillions, can make these issues appear too challenging to address. Communicating economic findings in compelling, transparent, and easily digestible ways is critical.
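A toy annualization makes this communication problem concrete: the same hypothetical event cost yields very different expected annual losses depending on where in the 10-to-60-year range the return period falls. All figures below are illustrative assumptions, not numbers presented at the workshop.

```python
# Back-of-the-envelope annualization of pandemic losses.
# The $3 trillion event cost and the return periods are assumptions
# made for the sake of the example.

def expected_annual_loss(loss_per_event: float, return_period_years: float) -> float:
    """Annualized expected loss: event cost spread over its return period."""
    return loss_per_event / return_period_years

cost = 3e12  # hypothetical $3 trillion global pandemic
print(expected_annual_loss(cost, 10))  # $300B/year at the 10-year end
print(expected_annual_loss(cost, 60))  # $50B/year at the 60-year end
```

The six-fold swing between the two ends of the range is one concrete reason the "right" level of preparedness investment is so hard to pin down for policymakers working within election cycles.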

One of the most discussed topics was antimicrobial resistance (AMR). A clear threat to global health security, AMR has received increasing attention from governments, industry, and academia, but finding solutions remains daunting. A key issue is that pharmaceutical investment in antibiotics is generally not as profitable as other drugs because antimicrobial prescriptions are usually for acute issues and are restricted to reduce future drug resistance. Executives from major pharmaceutical stakeholders including Merck, Pfizer, and the International Federation of Pharmaceutical Manufacturers and Associations (IFPMA) discussed potential strategies they believe could address this issue. While many government-led incentives are designed to “push” industry to increase investment through initial R&D benefits, speakers cited the need for “pull” incentives that enable companies to view antibiotic development as a truly profitable long-term venture. Similar suggestions appeared in a recent World Economic Forum report. Ensuring market exclusivity, fostering public-private partnerships such as CARB-X, and structuring reimbursement mechanisms so that profits aren’t based on total use were also proposed as important future directions. Conversely, some participants acknowledged that solutions to incentivize industry can be politically challenging because they often result in resource-limited governments paying for-profit companies more for their products. Transparency, trust, and empathy for the complexities of these issues were cited as important considerations in tackling AMR.

The valuable discussions and research presented in this two-day workshop served as an instrumental stepping stone for future progress in understanding and addressing the economic issues of infectious disease.

House Energy & Commerce Subcommittee Hearing on Public Health and Biopreparedness: Observations

Strengthening our national health security has been an enduring, bipartisan objective of the federal government for many years. Prompted in part by infectious disease threats like Ebola, Zika and this past year’s particularly severe seasonal influenza, Congress has used the reauthorization of a key piece of health security legislation as a moment to take stock. The Oversight and Investigations Subcommittee of the House Committee on Energy and Commerce recently called a hearing on the federal government’s ability to respond to natural and intentional infectious disease threats. The conversation mostly centered around pandemic influenza and the development of medical countermeasures for biological attacks.

There were four expert witnesses representing the primary responding HHS components:

  • Dr. Rick Bright – Director of BARDA and Deputy Assistant Secretary of ASPR
  • Dr. Anne Schuchat – Principal Deputy Director, CDC
  • Dr. Anthony Fauci – Director, NIH NIAID
  • Rear Admiral Denise Hinton – Chief Scientist, FDA

There were several important comments and opinions shared during the hearing, but the main takeaway points I gleaned were as follows:

  1. There was a significant amount of focus on preparing for an influenza pandemic, particularly in terms of detection and response. There were a lot of questions directed towards the NIH and FDA representatives on this topic. They both emphasized the importance of inter-agency collaboration to speed up the clinical trial process.
  2. There is bipartisan support for the reauthorization of the Pandemic and All-Hazards Preparedness Act (PAHPA), which will provide funding and resources to combat emerging infectious disease threats.
  3. Concern was expressed by committee members over the transfer of the Strategic National Stockpile (SNS) from the CDC to the Assistant Secretary for Preparedness and Response (ASPR). Representatives from both agencies reassured committee members of ongoing communication efforts to ensure a smooth transition.

The transfer of the SNS was a topic of concern because of the outsized importance of the SNS during public health emergencies. The SNS is a critical component of U.S. preparedness and response efforts. Proper management of the SNS is necessary for the rapid and organized distribution of medical countermeasures to affected populations. Witnesses from ASPR and CDC reaffirmed the dedication of their respective organizations to ensuring a smooth transfer with minimal disruption under new management. Dr. Bright said, “We have several working groups working very close with CDC and ASPR to evaluate various components of the stockpile transfer.” This sentiment was confirmed by Dr. Schuchat, who followed up, “We are well on the way to a seamless transition.”

Many questions about the development of medical countermeasures and getting those products rapidly into the market were directed towards the witnesses from NIAID and the FDA. Dr. Fauci of NIAID made several references to the promising avenue of multiplex point-of-care diagnostic tests, which are capable of detecting multiple different viruses in one test with one sample. These tests could revolutionize response efforts to disease outbreaks, particularly in resource-poor settings. Dr. Fauci seemed enthusiastic about the technology, saying that “multiplex is a very important tool of the future now for detecting outbreaks.” Dr. Fauci also discussed Phase II trials for an Ebola vaccine and ongoing work to develop a universal influenza vaccine, although he admitted that such a product is still years away.

Reauthorizing PAHPA would be an important step in increasing the capacity of the United States to protect its own citizens and the global community from infectious disease threats. The original act was signed into law in 2006 and was reauthorized in 2013. The current reauthorization is being undertaken by Congress under the title of the Pandemic and All-Hazards Preparedness and Advancing Innovation Act (PAHPAI). PAHPAI will continue to support important preparedness efforts, such as funding the development and stockpiling of the vaccines, therapeutics, and medical devices that will be needed during an emergency and enabling local, state, and federal public health agencies to rapidly respond to infectious disease emergencies. In his opening statement, Chairman Harper said, “Passage of PAHPA’s reauthorization would not only provide critical certainty for public health agencies and industry partners, it would also bring about some much needed reforms.” In general, committee members appeared supportive of reauthorization.

To Identify Pandemic Pathogens, Diagnose Every Case


Of the roughly 1,400 bacterial, viral, protozoan, and fungal pathogens that are known to infect people, only a few have demonstrated the potential to cause a “sudden, extraordinary, widespread disaster beyond the collective capability of national and international governments and the private sector to control” – what my colleagues and I define as ‘Global Catastrophic Biological Risks’ (GCBRs). The members of this infamous club include some of humanity’s greatest scourges, including plague, smallpox, and pandemic influenza. While the current threat posed by these pathogens has been attenuated somewhat due to the modern infectious disease armamentarium (e.g., basic sanitation and hygiene, vaccines, and modern medical care), these and other pathogens nevertheless have the potential to cause mass death and the chaos and suffering that would inevitably follow.

In response, governments and health authorities have attempted to bound the problem by compiling both formal and informal lists of the pathogens most likely to cause severe epidemics or pandemics. Notable examples include the WHO’s R&D Blueprint and the CDC’s Bioterrorism Agents list. While these lists serve important planning and regulatory functions, they can also inhibit a comprehensive understanding of the biological threat landscape. As the 2009 emergence of the pandemic H1N1 influenza virus in Mexico (rather than Southeast Asia) and the importation of Zika virus to the Americas demonstrate, surprise has been the norm.

We believe this stems, in part, from list-based thinking. That’s why we were so pleased to see WHO include “Disease X” in this year’s update of the R&D Blueprint, as a reminder of the importance of constant vigilance and preparedness. This theme will feature prominently in the Clade X tabletop exercise that our Center is conducting this week.

For the past several months, my colleagues and I have been working to identify some common characteristics of pandemic pathogens. We hope our findings will spur more nuanced assessments of biological threats. I encourage you to check out our recently released final report.

There is one major finding from that report I want to highlight here.

The fact is, the vast majority of illnesses and deaths from infectious causes are never definitively diagnosed. This is true regardless of where in the world care is rendered. Instead, clinicians primarily rely on constellations of signs and symptoms, what are called ‘syndromes’, to whittle down the list of things that could conceivably cause the illness. While far less labor intensive than tracking each and every case down to its root cause, this aspect of medical practice makes it far more likely that index cases or clusters of known or unknown pathogens will go unnoticed.

We believe that clinicians the world over should be making routine use of classical clinical microbiology, clinical applications of next-generation sequencing, and point-of-care molecular diagnostics that are starting to become available. That information, once gleaned, should be rapidly and seamlessly transmitted to public health authorities. This improved flow of information would dramatically boost our ability to identify pandemic pathogens in a timely fashion.

Modernizing epidemiology through outbreak science

By Caitlin Rivers, PhD, MPH

I’m excited to announce that with support from the Open Philanthropy Project, my colleagues and I at the Johns Hopkins Center for Health Security will spend the next eighteen months developing a plan for an Outbreak Science Initiative to support the US government in responding to infectious disease outbreaks. The program would formally integrate the nation’s top disease outbreak scientists into federal response operations, where they could produce the forecasts, models, and analyses that decision makers need to allocate resources, compare interventions, and assess progress on outbreak containment. This capability would improve our ability to respond to outbreaks quickly and effectively.

We coin the term “outbreak science” to mean a subfield of epidemiology that uses infectious disease modeling, data science and visualization, and modern data practices for outbreak response. The goal of outbreak science is to connect public health decision makers with the most current data and analytics necessary to determine how best to contain outbreaks. Although this type of expertise has been influential in several major epidemics, it is often tapped by response officials in sporadic, ad hoc, and pro bono partnerships. There is currently no formal mechanism for public health officials to reliably and quickly access experts who can produce the models and analyses necessary to inform decision making.

The 2014-2015 Ebola response illustrates the value of outbreak science. One influential model published by the Centers for Disease Control and Prevention forecast a worst-case scenario of more than one million cases if the epidemic continued unabated. It is widely acknowledged that this model galvanized the international response that ultimately contributed to controlling the epidemic. However, CDC is one of just two groups in government with embedded outbreak science expertise. Most of the other models used during the outbreak, including those used to forecast case counts and monitor containment, were produced by academics with no formal connection to the response. They worked without guidance about the public health questions that needed answering, without official data sets, and without compensation. They also had to publish their results in academic journals instead of putting them directly in the hands of decision makers.
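To give a concrete sense of what a case-count forecast looks like, the sketch below uses a textbook SIR (Susceptible-Infectious-Recovered) compartmental model to compare an unabated scenario against one with interventions that reduce transmission. This is a generic, minimal illustration with entirely hypothetical parameter values, not the CDC's actual Ebola model, which was far more detailed in structure and inputs.

```python
# Minimal SIR sketch of an outbreak forecast. All parameters here are
# hypothetical illustrations -- this is NOT the CDC's Ebola model.

def sir_cumulative_cases(population, beta, gamma, days, i0=10):
    """Euler-integrate an SIR model with daily steps.

    beta  -- transmission rate (new infections per infectious person per day)
    gamma -- recovery rate (1 / average infectious period in days)
    Returns the cumulative number of infections after `days` days.
    """
    s, i, r = population - i0, float(i0), 0.0
    cumulative = float(i0)
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        cumulative += new_infections
    return cumulative

# Hypothetical population and rates; lowering beta models the effect of
# interventions (isolation, safe burials, behavior change).
pop = 20_000_000
unabated = sir_cumulative_cases(pop, beta=0.32, gamma=0.18, days=180)
mitigated = sir_cumulative_cases(pop, beta=0.20, gamma=0.18, days=180)

print(f"Unabated 180-day projection:     {unabated:,.0f} cases")
print(f"With interventions (lower beta): {mitigated:,.0f} cases")
```

The point of such a comparison is exactly the one decision makers faced in 2014: a side-by-side projection of scenarios makes the payoff of intervention visible in concrete case counts, which is what reportedly galvanized the international response.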

Conversely, decision makers without outbreak science support had no choice but to act without full analysis of the current and future state of the outbreak. This lack of adequate situational awareness potentially contributed to the late identification of funerals as superspreading events, and to the overdue surge of hospital beds. The disconnect between public health decision makers and modeling expertise limited the timeliness and applicability of most of the models produced during the Ebola outbreak, and reduced the effectiveness of the response. An outbreak science program would aim to close this critical gap in future public health events by formally integrating the best outbreak scientists into outbreak response operations to enable faster control of epidemics.