2016 Gets its Very Own Bio-lapse

In mid-December, USA Today published an article by Alison Young entitled Emergency trainees mistakenly exposed to deadly ricin. In it, she reports that more than 9,600 trainees at the Federal Emergency Management Agency’s (FEMA) Center for Domestic Preparedness (CDP) located in Anniston, Alabama had been unintentionally exposed to a lethal form of ricin during a series of training exercises spanning a five-year period. In response to this revelation, FEMA Administrator Craig Fugate has already called for an investigation by DHS’s inspector general, and CDP has suspended all training involving toxic exposures.

Most importantly, there is no indication that anyone became ill following exposure to the ricin used during training, and students were wearing proper personal protective equipment while working with the toxin. That’s the good news. It would have been a cruel irony for anyone to be harmed simply while honing their skills for this rewarding but uniquely dangerous calling. What follows are some initial reactions to this story.

Some background is in order. CDP is where the nation’s police officers, firefighters, emergency medical services providers, emergency managers, and healthcare workers gain the particular knowledge and experience needed to respond to a range of crises, including those of a CBRN nature. State and local responding agencies can send staff to Anniston for highly specialized training courses, such as the infection control and clinical course offered to US-based healthcare providers who deployed to West Africa to contribute to the Ebola response effort in 2014-15.

Next, a few observations about ricin. From the Center’s fact sheet:

Ricin toxin or ricin, as it is more commonly known, is a protein that consists of A and B subunits that can be extracted from the beans of the castor plant, Ricinus communis…The toxic effects of ricin are caused by its ability to inhibit protein synthesis. Ricin can be introduced to the body through inhalation of an aerosol, or through ingestion, injection, or infusion.

The mosaic nature of ricin’s composition is important to understanding what had apparently been going on at Anniston for the past 5 years. Per FEMA’s statements in the USA Today article, they thought their students were working with a powdered preparation of ricin’s A chain protein, which would have been much safer to work with while still generating positive results by environmental detection assays. Here is where things get (semi) interesting.

There are actually two chemically distinct lectins produced in castor beans, ricin and Ricinus communis agglutinin (abbreviated “RCA”), which is significantly less toxic than ricin. To add to the confusion, at least one naming convention designates the whole ricin toxin “RCA60”. One might wonder whether CDP staff saw that they were receiving a product labeled “RCA” and interpreted that to mean “Ricin Chain A.”

Needless to say, there should be a thorough investigation in this case, as the question of responsibility appears to have devolved into a finger pointing exercise between FEMA and the contractors responsible for providing the agency with the product they ordered.

Regardless of confusion over names, labels, purchase orders, and intentions, many people may quite reasonably be wondering why first responders should have anything to do with ricin toxin in the first place. The answer relates to a worrisome but under-appreciated trend in the post-9/11 era: the skyrocketing occurrence of white powder incidents. First responders are called to thousands of these events per year, the vast majority of which turn out to be hoaxes. But, every once in a while, a bored college student, romantic rival, or would-be assassin figures out how to formulate and use at least a crude preparation of the real thing. As a result, every suspect powder has to be treated as potentially harmful until field-based detectors, confirmed by laboratory diagnostic tests, indicate otherwise. Make no mistake, the high frequency of these white powder incidents makes bio-detection as vital for some local first responders as CPR.

In order to be sure that responders can handle these white powder incidents safely, there is no substitute for rigorous training. To protect themselves and the public, local first responders have to be able to discriminate fake threats from real ones. The myriad detection technologies available to first responders vary significantly in their ease of use, sensitivity, specificity, and turnaround time. In other words, they can be difficult to use properly; hence the need for training with live agents, to ensure that the equipment works properly and that first responders can run it accurately and interpret the results.
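To make the sensitivity/specificity point concrete, here is a minimal sketch (using hypothetical numbers, not figures from any actual detector) of why a positive field reading still requires laboratory confirmation: when nearly all incidents are hoaxes, even an accurate detector produces mostly false alarms.

```python
# Illustrative only: why field detector hits on suspect powders must be
# confirmed by laboratory testing. All figures below are hypothetical.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive detector reading reflects a real agent (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Assume a detector with 95% sensitivity and 99% specificity, and assume
# only 1 in 1,000 suspect powders contains a real agent (hoaxes dominate).
ppv = positive_predictive_value(0.95, 0.99, 0.001)
print(f"PPV: {ppv:.1%}")  # about 8.7% -- most positive readings are still false alarms
```

The design point is that prevalence, not just detector quality, drives the reliability of a positive result, which is why field detection and confirmatory lab diagnostics are treated as two distinct steps.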

Unavoidably, there is the issue of optics. Ms. Young understandably links the Anniston incident with prior examples of biosafety lapses by federal biodefense programs. Let’s review. Last year it was live samples of anthrax inadvertently mailed by a Department of Defense lab (a story that may actually say as much about the incredibly hardy biology of B. anthracis as it does the sufficiency of inactivation protocols). The year before that, it was the unexpected discovery of viable variola virus in an FDA freezer on NIH’s campus in Bethesda, MD. In that same year, CDC made headlines with an unexpected exposure of staff to B. anthracis. So… not great. And now, 2016 has a bio-lapse of its very own.

Once you get past the headlines though, it’s less clear what should be done about that less-than-stellar track record. In each instance, the value of the underlying program to national and global health security is beyond question. What’s more, each of those incidents took place in very different operational and organizational contexts, so a one-size-fits-all policy fix isn’t likely to materialize. Finally, because of the amount of agent CDP was working with (reportedly less than 70 mg), they were not recognized as a regulated entity under the CDC’s Select Agent Program, so this instance should not be interpreted as a regulatory failure.

That said, it is not my intention to downplay the significance of this unusually long-running event. Had exposure to a pathogen or toxin resulted in actual harm, the consequences could have been tragic. Additionally, should the public or their legislative representatives begin to perceive more risk than benefit from federal biodefense programs, their continued existence could be called into question, to the detriment of our national health security.

What this series of unfortunate events underlines is the need for continued, systemic commitment to - and flawless execution of - biosafety and biosecurity practice at every governmental agency engaged in these efforts. In addition, the field could probably benefit from an increased level of scientific inquiry into how to enhance biosafety at the institutional and national levels. Whether or not those steps are taken before the next lapse takes place is an open question. 

Take-aways from the National Academies’ “Gain of Function” Meeting

From: CDC Public Health Image Library

On Monday and Tuesday I attended the National Academies’ (NAS) Potential Risks and Benefits of Gain-of-Function Research symposium held in Washington DC. The meeting was part of a US government process initiated in October that established a funding moratorium on federally funded experiments that “may be reasonably anticipated to confer attributes to influenza, MERS, or SARS viruses such that the virus would have enhanced pathogenicity and/or transmissibility in mammals via the respiratory route.”

That process also includes a review of this research by the National Science Advisory Board for Biosecurity (NSABB), and the completion of an independent Risk Assessment, supported by NIH.  (Marc Lipsitch and I published a paper in mBio this week with our views on the moratorium and the risk assessment.) The purpose of this week’s NAS meeting was to provide insights to the NIH as it considers the potential risks and benefits of this area of research.

The meeting was informative and is worth watching on the website for those with interest in these issues.  Here are 6 observations I would make coming out of the meeting:

Focus of concern - It seemed that there was a broadly held concern in the meeting regarding research that can reasonably be anticipated to create a novel respiratory virus that is both highly pathogenic as well as transmissible in a mammalian model that can predict human transmissibility. While there was no explicit voting on that issue and it was not a unanimous view, the level of shared concern was encouraging to me and suggests the potential for common ground and a path forward in the discussion. At the same time, though, some of those in the meeting who seemed to have that shared concern also seemed to continue to be supportive of the work undertaken in the last few years to engineer highly pathogenic strains of H5N1 to become transmissible in ferrets. It seems to me that these two views are inconsistent. In any event, judgments about the benefits and risks of this very circumscribed area of research are at the crux of this discussion.

Stop using the term “GOF” - The term “Gain of Function (GOF)” is overly broad in that it includes scientific work that is valuable and does not pose serious dangers. The term GOF was first applied to this very specific research area in December 2012 in an NIH meeting on the issue. It may have been useful shorthand at the time, but it has created problems now by sweeping many types of experiments under the umbrella of the current moratorium and potential future actions in this area. “GOF” is much broader than the working definition of what I think is the area of great concern per #1 above, i.e., the creation of novel respiratory viruses that are both highly pathogenic as well as transmissible in a mammalian model predictive of human transmissibility. For this reason, we should stop describing this work as GOF and use more precise language. Some, including Lipsitch and me, have used the term Potential Pandemic Pathogens (PPP) to describe the research area of concern. Whatever alternative language is used, it is time to change the way we discuss this.

Work that should resume - As per the comments made by prominent coronavirus scientists (Ralph Baric and Mark Denison) at the NAS meeting, no coronavirus scientists are doing (or planning to propose) work that would create a novel coronavirus (MERS, SARS, or other) that would be both pathogenic in humans and transmissible in a mammalian model that would predict human transmissibility. Given that is the case, and given that the moratorium has led to a larger-scale interruption of coronavirus research that is unrelated to PPP, it seems reasonable and prudent to end the moratorium on coronavirus-related work. There is also a realm of flu vaccine development that relies on GOF techniques that do not result in the creation of PPP; that work, too, should be allowed to go forward. At present, PPP-related work would seem to be confined to highly pathogenic avian influenza research intended to confer mammalian transmissibility in ferrets (the surrogate model for human transmissibility).

Relevance to vaccines? - Proponents of influenza PPP work described the potential benefits of the work. There is no disagreement that it can contribute to fundamental knowledge and basic science discovery. But there continued to be division in the room regarding the assertions of its other benefits, including the assertion that influenza PPP work is important in the creation of vaccines and medicines. For example, Robert Lamb said that it was time “to get rid of the mantra that GOF is necessary to make vaccines and medicines”. It was great to have Phil Dormitzer there from Novartis. But as good as he is, he was still the only one at the meeting from the vaccine industry. Given the centrality of the question of PPP relevance to vaccine and therapeutic development, more scientists from vaccine and therapeutic companies should be part of this debate.

International implications - I thought a comment made by Barbara Johnson is worth underscoring. She said (I am paraphrasing) that other countries will follow the lead of the USG on this. If NIH funds PPP work, other countries will also pursue this work, and some of them will not be able to perform it with the same biosafety controls in place as the labs that have been funded to do this work by NIH. I think this is particularly concerning.

Accountability - We heard from Rob Weyant of CDC that if someone deliberately stole or misused one of the research pathogens at hand, they would be pursued by law enforcement consistent with the law.  But what did not get addressed in the meeting discussion is the issue of accountability should there be an accident with a PPP strain and then epidemic spread ensued. Is the scientist accountable? The institution? The local or federal government? And what would accountability mean in that case?

Finally, the panel discussion on Models of Risk Benefit Assessment was particularly useful because it helped drill into what may and may not be reasonable expectations for the risk assessment ahead. In terms of meeting participants, I was surprised by the relative paucity of leading scientists who were not directly involved in influenza and coronavirus research. Given the consequences of this debate, I would have hoped that more top scientists outside of these fields would have been invited and participated. I hope they are present in larger numbers in the debate in the time ahead. Similarly, the point was made a number of times at the meeting that there has been insufficient public input into this deliberative process. For an issue of such import, I hope that broader public engagement will occur, through inclusion at subsequent meetings and via other means of gauging public sentiment and commentary on this issue.

Is virus gain-of-function research a security risk?

Later today, I will be speaking at the National Academies’ “Potential Risks and Benefits of Gain-of-Function Research Symposium,” about the biosecurity risks of gain-of-function research. The US government announced on October 17 that there would be a pause in funding so-called “gain of function” (GOF) work with influenza, MERS, and SARS until the risks and benefits could be evaluated. Monday’s symposium is part of the “deliberative process wherein the National Science Advisory Board for Biosecurity (NSABB), the National Research Council (NRC) and the Institute of Medicine (IOM) consider the risks and benefits associated with research involving pathogens with pandemic potential.”

The term “gain-of-function” (or GOF) research came into the biosecurity lexicon in December 2011, after NSABB recommended that two influenza papers not be published in their entirety because there was methodological information related to increasing transmissibility of the virus that “could enable replication of the experiments by those who would seek to do harm.” The NSABB stated that they were not reacting to a specific bioterrorist threat, but that “publishing these experiments in detail would provide information to some person, organization, or government that would help them to develop similar mammal-adapted influenza A/H5N1 viruses for harmful purposes.” The NSABB recommendation led to a self-imposed moratorium on GOF influenza research, a year of discussion, new US policies for federal review and oversight of “dual-use research of concern,” a reversal of the NSABB stance, and in the end, the publication of the controversial papers.  Last summer, I wrote a full account of what happened, and the scientific, safety, security, and public health issues at stake.

One aspect I find fascinating about the GOF controversies is how long these security and safety issues have been brewing without resolution or consensus. Ten years ago, an article published in Science quoted prominent flu scientists about a controversial series of experiments which were at the time labeled “reassortment studies.” WHO’s principal flu scientist in 2004, Klaus Stöhr, felt that there were already plenty of ideas in the scientific literature which could help someone create a pandemic. Albert Osterhaus of Erasmus University conceded that with these techniques, “You could create a monster. But it’s a monster that nature could produce as well.” Richard Webby of St. Jude Children’s Research Hospital said in the article that anyone with a good knowledge of molecular biology could recreate a pandemic virus once it’s discovered, and that “you can destroy this virus, but it will never really be gone.”

The experiments reported on in the Science article began in 2000. Now, at the close of 2014, after many deliberations and reports, we are still hotly debating what these flu experiments could mean for biosecurity.

There are some practical points worth remembering:

  • There are few true surprises. Scientific research is incremental—so there is usually not a clear line one can draw to delineate when a particular experiment poses a security risk. In the case of flu, there is a long history of experimentation and publications describing mutations that are associated with enhanced pathogenicity or transmissibility in mammals. There is also widespread, accessible, and progressively illuminating information about the techniques that could be used to produce more worrisome influenza strains. While one or two papers may be particularly concerning—the NSABB thought so in 2011—the information that could be misused for harm was already “out there.” Because so much was already known about what genetic sequences were associated with mammalian transmissibility of flu, an editorial in Nature Biotechnology compared the NSABB recommendation to redact the GOF papers to a “cat-less bag” or “horse-less stable.”
  • Helpful shortcuts for potential bioweaponeers? Let’s face it, publishing the sequence of new, potentially pandemic causing strains does lower barriers towards weaponization of the influenza virus. While a lot of information is indeed out in the world that could be used to weaponize the flu, a bioterrorist would be able to skip some tedious and technically challenging steps if the genetic sequence of a potentially pandemic-causing influenza strain were published.
  • What are the borders? The NAS symposium is focused on GOF for Influenza, SARS, and MERS, but they are not unique in the security concerns they raise. Many experimental alterations to disease agents have the potential to be used to make weapons.  Past debates about whether to pursue or publish particular biological research studies that raised security concerns have ultimately resulted in publication, e.g., papers describing how anthrax and smallpox could potentially evade known vaccines. Many publications describe how bacterial pathogens can be made drug resistant. Added to that list are all the un-modified disease agents that are currently found in sick people, animals, and in the natural environment all over the world. All have potential to create chaos.
  • Why would someone bother? Many paths for weaponization don’t require as much R&D as GOF of influenza, SARS, or MERS would. Making a biological weapon requires more steps than just pathogen acquisition. There is the question of delivery. There is also the question of what effect they could cause.  Legitimate studies of pathogenicity and transmissibility are done in laboratory settings with animal models—and as we all know, results from those tests do not always translate well into human response.
  • Who are we worried about? It is difficult to assess risks of misuse of research without focusing on a particular actor (state, non-state, or individual) who could misuse it. Unfortunately, there is little agreement about whom to worry about or what their capabilities could be, which will be a challenge in assessing security risks.  There have been reports of interest in biological warfare by terrorist groups including ISIS. There are studies of the behavioral indicators of violent non-state actors which do not give hope that this problem will be less critical in the coming decade (for one example, see University of Maryland’s Unconventional Weapons and Technology group’s Anatomizing Chemical and Biological Non-State Adversaries project). In work that my colleagues Crystal Boddie, Matt Watson, and I will be publishing soon with Gary Ackerman at U MD, we found that there is basically no agreement among biosecurity experts even about which actors we need to fear most—state, non-state group, or well-trained individuals.

In the end, assessing the security impact of research – the GOF work of SARS, MERS, or influenza—has to be placed in context, along with all potential research which could theoretically be misused by a variety of actors with variable skills and resources. But it can’t be examined solely for its potential risks. The potential benefits of this work to security must also be taken into account. If the GOF work uncovers a potential strain that could cause a pandemic either naturally or deliberately, and IF that aids vaccine development, that would be a security benefit.

Given the complexities of assessing how much dual-use research of concern might lower barriers to biological weapons development (and understanding that there are particulars which make every research case different), I would hope that emphasis is placed on performing sound and safe science, which increases our understanding of the natural world and improves our abilities to fight disease. The line between that kind of research and a potential security impact—a positive one—is much clearer.