Is virus gain-of-function research a security risk?

Later today, I will be speaking at the National Academies’ “Potential Risks and Benefits of Gain-of-Function Research Symposium” about the biosecurity risks of gain-of-function research. The US government announced on October 17 that it would pause funding for so-called “gain-of-function” (GOF) work with influenza, MERS, and SARS until the risks and benefits could be evaluated. Monday’s symposium is part of the “deliberative process wherein the National Science Advisory Board for Biosecurity (NSABB), the National Research Council (NRC) and the Institute of Medicine (IOM) consider the risks and benefits associated with research involving pathogens with pandemic potential.”

The term “gain-of-function” (GOF) research came into the biosecurity lexicon in December 2011, after the NSABB recommended that two influenza papers not be published in their entirety because they contained methodological information related to increasing the transmissibility of the virus that “could enable replication of the experiments by those who would seek to do harm.” The NSABB stated that it was not reacting to a specific bioterrorist threat, but that “publishing these experiments in detail would provide information to some person, organization, or government that would help them to develop similar mammal-adapted influenza A/H5N1 viruses for harmful purposes.” The NSABB recommendation led to a self-imposed moratorium on GOF influenza research, a year of discussion, new US policies for federal review and oversight of “dual-use research of concern,” a reversal of the NSABB stance, and, in the end, the publication of the controversial papers. Last summer, I wrote a full account of what happened and of the scientific, safety, security, and public health issues at stake.

One aspect I find fascinating about the GOF controversies is how long these security and safety issues have been brewing without resolution or consensus. Ten years ago, an article published in Science quoted prominent flu scientists on a controversial series of experiments then labeled “reassortment studies.” Klaus Stöhr, WHO’s principal flu scientist in 2004, felt that there were already plenty of ideas in the scientific literature that could help someone create a pandemic. Albert Osterhaus of Erasmus University conceded that with these techniques, “You could create a monster. But it’s a monster that nature could produce as well.” Richard Webby of St. Jude Children’s Research Hospital said in the article that anyone with a good knowledge of molecular biology could recreate a pandemic virus once it’s discovered, and that “you can destroy this virus, but it will never really be gone.”

The experiments reported on in the Science article began in 2000. Now, at the close of 2014, after many deliberations and reports, we are still hotly debating what these flu experiments could mean for biosecurity.

There are some practical points worth remembering:

  • There are few true surprises. Scientific research is incremental—so there is rarely a clear line delineating when a particular experiment begins to pose a security risk. In the case of flu, there is a long history of experimentation and publication describing mutations associated with enhanced pathogenicity or transmissibility in mammals. There is also widespread, accessible, and progressively illuminating information about the techniques that could be used to produce more worrisome influenza strains. While one or two papers may be particularly concerning—the NSABB thought so in 2011—the information that could be misused for harm was already “out there.” Because so much was already known about which genetic sequences were associated with mammalian transmissibility of flu, an editorial in Nature Biotechnology compared the NSABB recommendation to redact the GOF papers to a “cat-less bag” or “horse-less stable.”
  • Helpful shortcuts for potential bioweaponeers? Let’s face it: publishing the sequence of a new, potentially pandemic-causing strain does lower barriers to weaponizing the influenza virus. Much of the information that could be used to weaponize flu is indeed already in the world, but a bioterrorist could skip some tedious and technically challenging steps if the genetic sequence of such a strain were published.
  • What are the borders? The NAS symposium is focused on GOF research with influenza, SARS, and MERS, but these pathogens are not unique in the security concerns they raise. Many experimental alterations to disease agents could potentially be used to make weapons. Past debates about whether to pursue or publish particular biological research studies that raised security concerns have ultimately ended in publication, e.g., papers describing how anthrax and smallpox could potentially evade known vaccines. Many publications describe how bacterial pathogens can be made drug resistant. Added to that list are all the unmodified disease agents currently found in sick people, animals, and the natural environment all over the world. All have the potential to create chaos.
  • Why would someone bother? Many paths to weaponization require far less R&D than GOF work on influenza, SARS, or MERS would. Making a biological weapon requires more steps than just acquiring a pathogen: there is the question of delivery, and there is the question of what effects a modified pathogen would actually produce. Legitimate studies of pathogenicity and transmissibility are done in laboratory settings with animal models—and as we all know, results from those tests do not always translate well into human responses.
  • Who are we worried about? It is difficult to assess the risks of misuse of research without focusing on a particular actor (state, non-state, or individual) who could misuse it. Unfortunately, there is little agreement about whom to worry about or what their capabilities might be, which will be a challenge in assessing security risks. There have been reports of interest in biological warfare by terrorist groups, including ISIS. Studies of the behavioral indicators of violent non-state actors offer little hope that this problem will become less critical in the coming decade (for one example, see the University of Maryland Unconventional Weapons and Technology group’s Anatomizing Chemical and Biological Non-State Adversaries project). In work that my colleagues Crystal Boddie, Matt Watson, and I will soon publish with Gary Ackerman at the University of Maryland, we found essentially no agreement among biosecurity experts even about which actors we should fear most: states, non-state groups, or well-trained individuals.

In the end, assessing the security impact of research, including GOF work on influenza, SARS, or MERS, has to be placed in context alongside all potential research that could theoretically be misused by a variety of actors with variable skills and resources. But it can’t be examined solely for its potential risks; the potential benefits of this work to security must also be taken into account. If GOF work uncovers a strain that could cause a pandemic, whether naturally or deliberately, and if that knowledge aids vaccine development, that would be a security benefit.

Given the complexities of assessing how much dual-use research of concern might lower barriers to biological weapons development (and understanding that particulars make every research case different), I would hope that emphasis is placed on performing sound and safe science that increases our understanding of the natural world and improves our ability to fight disease. The link between that kind of research and a security impact, a positive one, is much clearer.