The problem of horsepox synthesis: new approaches needed for oversight and publication review for research posing population-level risks

By Tom Inglesby

In summer 2017, a team of Canadian scientists revealed that they had synthesized the horsepox virus in a lab at the University of Alberta and planned to publish their research. Others and I expressed strong concern about the work at that time, and we have opposed its publication.

Today, the science journal PLOS ONE published this research online, so it is now globally available. The horsepox researchers acknowledged that this work may lower the bar for other scientists interested in synthesizing smallpox. Before its eradication from nature in 1980, smallpox was humanity’s greatest infectious disease killer. Almost none of the global population in 2018 has effective immunity, and smallpox vaccine supplies are available only to protect a small fraction of the world. There are only two declared smallpox repositories, one in the United States and one in Russia. Any research that reduces the challenges of synthesizing smallpox de novo outside these repositories—as this work does—should be off limits unless, perhaps, extraordinary benefits were to make the risks worth taking.

This work does not carry such extraordinary benefits. As Gregory Koblentz has articulated in his paper on this issue, there is not a compelling case that governments will need or use this virus to develop a new and improved smallpox vaccine. Another justification offered by the horsepox researchers was that it is important to demonstrate the feasibility of synthesizing smallpox de novo. But creating a new extraordinary risk (i.e., instructions for how to simplify smallpox synthesis) to show that the risk is legitimate is a dangerous path. In any event, relevant members of the science community have widely agreed that smallpox synthesis has been technically feasible for many years. What this new research does is show the global scientific community how to synthesize orthopox viruses efficiently, overcome technical challenges, and employ techniques developed by one of the leading orthopox labs in the world.

Now, with this research published and accessible to the world, those of us who are deeply concerned about it should consider what is needed to prevent future events with such potentially harmful impact. This horsepox synthesis work has exposed serious flaws in how governments oversee research with potentially profound population-level adverse consequences. The University of Alberta research team admitted that regulatory authorities “may not have fully appreciated the significance of, or potential need for, regulation or approval of” their work. Clearly, fundamental changes need to be made.

The most important locus of control should be whether specific research is approved and funded in the first place. When scientists are considering research that could increase highly consequential national population-level risks, national authorities and leading technical experts on those issues should be part of the approval process. When there are highly consequential international population-level implications, international public health leaders should also be involved. And when researchers put forth claims about the potential benefits of such work to justify extraordinary risks, those claims ought not to be accepted without discussion; they should instead be examined by disinterested experts with the expertise to validate or refute them.

If research posing potential population-level risks is performed without such high-level national or international scrutiny, or without a disinterested examination of its claimed benefits, publishers should have clear guidance from governments and a process for engaging with governments in the decision-making process regarding publication. No such system is in place in the United States or elsewhere. Journals are now often on their own, in some cases aided by a dual-use research committee composed of individual scientists who may or may not fully understand the potential risks, or the claimed benefits, of the research being published.

We need to turn what we’ve learned from this damaging situation into actionable policies aimed at strengthening preparedness and global health security. The first step is changing the oversight and approval process for experiments that have the potential to create highly consequential population-level risks. We also need a corresponding publication review system for such research, with the scientific and government input necessary to avoid publishing research that increases risks to global populations.

Important questions global health and science leaders should be asking in the wake of horsepox synthesis

The publication of the experimental work that synthesized horsepox is imminent, according to multiple reports. Horsepox no longer exists in nature, so this was the creation of an extinct virus in the same genus as smallpox. It doesn’t infect people, but it causes pox disease in horses. Researchers have cited several objectives for the work: to develop it as a smallpox vaccine, to develop it as a virus-based cancer treatment, and to show that synthesizing smallpox de novo is possible.

The work raises a number of serious questions and concerns, partly about the specifics of the work and partly about what this says about biosecurity and biosafety considerations related to a circumscribed set of experiments.

The first question is whether experimental work should be performed for the purpose of demonstrating that something potentially dangerous and destructive could be made using biology. In this case, horsepox was created in the laboratory at least in part to show that de novo synthesis of smallpox virus is feasible. But leading virologists have agreed for many years that de novo smallpox synthesis was scientifically feasible, and there has been no serious counterargument to the contrary. The important decision going forward is whether research with high biosafety or biosecurity risks should be pursued with the justification of demonstrating that something dangerous is now possible. I don’t think it should. Creating new risks to show that these risks are real is the wrong path. At the very least, it’s a serious issue needing serious discussion.

A second question, more relevant to this experiment, is how much new detail will be provided in the forthcoming publication regarding how to construct an orthopox virus. It is one thing to create the virus; it is another thing altogether to publish prescriptive information that would substantially lower the bar for others to create smallpox. The University of Alberta lab where the horsepox construction took place is one of the leading orthopox laboratories in the world, technically able to navigate the challenges and inherent safety risks of synthesis. Will labs that were not previously capable of meeting this technical challenge find it easier to make smallpox after the experimental methodology is published?

A third question relates to the approval process for experimental work with implications for international biosecurity or biosafety. The researchers who did this work are reported to have gone through all appropriate national regulatory authorities, yet they have said that those authorities “may not have fully appreciated the significance of, or potential need for, regulation or approval of” the work. So while work like this has potential international implications – it would be a bad development for all global citizens if smallpox synthesis becomes easier because of what is learned from this publication – it is reviewed by national regulatory authorities without international norms or guidelines to direct them. This means that work considered very high risk and therefore rejected by one country may be approved by another.

In the case of the horsepox experiment in Canada, the Advisory Committee on Variola Virus Research at WHO was briefed on the work only after it was completed. Moreover, the primary charge of that committee is actual smallpox research (as opposed to horsepox). Beyond that, this WHO committee is unique: WHO does not have disease-by-disease committees that review work case by case for other pathogens.

I think the new P3CO policy published by the White House in January 2017 could be a good step forward for future US policy on experiments involving potential pandemic pathogens. Whether and how that policy will be implemented remains to be seen, since it is guidance for federal agencies but does not require their action. Importantly, even if this policy had been implemented in the US, it does not appear it would have had bearing on the horsepox research had that work been proposed there. So even though the US has spent a substantial amount of time considering these kinds of issues, it still does not have policy (or high-level review committees) that directly considers experiments like this. Beyond that, there is no international component to P3CO, and there clearly needs to be one. We need agreed-upon norms to help guide countries and their scientists regarding work that falls into this category, and high-level dialogue about the necessary role of scientific review, guidance, and regulation for work in special categories of highest concern. It is not clear that these considerations are even being discussed internationally.

The rapid advance of biology in the world overall will continue to have enormous health and economic benefits for society. The entrepreneurial and unpredictable nature of biological research, now coupled with powerful global markets, is overwhelmingly positive for the world. But this case of horsepox synthesis shows us that there are also specific and serious challenges that require special attention now.  

Tom Inglesby, MD, is the director of the Johns Hopkins Center for Health Security.

Take-aways from the National Academies’ “Gain of Function” Meeting


On Monday and Tuesday I attended the National Academies’ (NAS) Potential Risks and Benefits of Gain-of-Function Research symposium held in Washington, DC. The meeting was part of a US government process initiated in October 2014 that established a moratorium on federally funded experiments that “may be reasonably anticipated to confer attributes to influenza, MERS, or SARS viruses such that the virus would have enhanced pathogenicity and/or transmissibility in mammals via the respiratory route.”

That process also includes a review of this research by the National Science Advisory Board for Biosecurity (NSABB) and the completion of an independent Risk Assessment, supported by NIH. (Marc Lipsitch and I published a paper in mBio this week with our views on the moratorium and the risk assessment.) The purpose of this week’s NAS meeting was to provide insights to the NIH as it considers the potential risks and benefits of this area of research.

The meeting was informative and is worth watching on the website for those with an interest in these issues. Here are six observations I would make coming out of the meeting:

Focus of concern - There seemed to be a broadly held concern in the meeting regarding research that can reasonably be anticipated to create a novel respiratory virus that is both highly pathogenic and transmissible in a mammalian model that can predict human transmissibility. While there was no explicit voting on that issue and it was not a unanimous view, the level of shared concern was encouraging to me and suggests the potential for common ground and a path forward in the discussion. At the same time, though, some in the meeting who seemed to share that concern also seemed to remain supportive of the work undertaken in the last few years to engineer highly pathogenic strains of H5N1 to become transmissible in ferrets. It seems to me that these two views are inconsistent. In any event, judgments about the benefits and risks of this very circumscribed area of research are at the crux of this discussion.

Stop using the term “GOF” - The term “Gain of Function (GOF)” is overly broad in that it includes scientific work that is valuable and does not pose serious dangers. The term GOF was first applied to this very specific research area in December 2012 at an NIH meeting on the issue. It may have been useful shorthand at the time, but it has created problems now by sweeping many types of experiments under the umbrella of the current moratorium and potential future actions in this area. “GOF” is much broader than the working definition of the area of greatest concern per #1 above, i.e., the creation of novel respiratory viruses that are both highly pathogenic and transmissible in mammalian models predictive of human transmissibility. For this reason, we should stop describing this work as GOF and use more precise language. Some, including Lipsitch and me, have used the term Potential Pandemic Pathogens (PPP) to describe the research area of concern. Whatever alternative language is used, it is time to change the way we discuss this.

Work that should resume - Per the comments made by prominent coronavirus scientists (Ralph Baric and Mark Denison) at the NAS meeting, no coronavirus scientists are doing (or planning to propose) work that would create a novel coronavirus (MERS, SARS, or other) that would be both pathogenic in humans and transmissible in a mammalian model that would predict human transmissibility. Given that, and given that the moratorium has led to a larger-scale interruption of coronavirus research unrelated to PPP, it seems reasonable and prudent to end the moratorium on coronavirus-related work. There is also a realm of flu vaccine development that relies on GOF techniques that do not result in the creation of PPP – that work, too, should be allowed to go forward. At present, PPP-related work would seem to be confined to highly pathogenic avian influenza research intended to confer mammalian transmissibility in ferrets (the surrogate model for human transmissibility).

Relevance to vaccines? - Proponents of influenza PPP work described its potential benefits. There is no disagreement that it can contribute to fundamental knowledge and basic science discovery. But there continued to be division in the room regarding the assertions of its other benefits, including the assertion that influenza PPP work is important in the creation of vaccines and medicines. For example, Robert Lamb said that it was time “to get rid of the mantra that GOF is necessary to make vaccines and medicines.” It was great to have Phil Dormitzer there from Novartis, but as good as he is, he was still the only one at the meeting from the vaccine industry. Given the centrality of the question of PPP relevance to vaccine and therapeutic development, more scientists from vaccine and therapeutic companies should be part of this debate.

International implications - A comment made by Barbara Johnson is worth underscoring. She said (I am paraphrasing) that other countries will follow the lead of the USG on this: if NIH funds PPP work, other countries will also pursue this work, and some of them will not be able to perform it with the same biosafety controls in place as the labs that NIH has funded to do it. I think this is particularly concerning.

Accountability - We heard from Rob Weyant of CDC that if someone deliberately stole or misused one of the research pathogens at hand, they would be pursued by law enforcement consistent with the law. But what did not get addressed in the meeting discussion is the issue of accountability should there be an accident with a PPP strain followed by epidemic spread. Is the scientist accountable? The institution? The local or federal government? And what would accountability mean in that case?

Finally, the panel discussion on Models of Risk Benefit Assessment was particularly useful because it helped drill into what may and may not be reasonable expectations for the risk assessment ahead. In terms of meeting participants, I was surprised by the relative paucity of leading scientists who were not directly involved in influenza and coronavirus research. Given the consequences of this debate, I would have hoped that more top scientists outside these fields would have been invited and participated. I hope they are present in larger numbers in the debate in the time ahead. Similarly, the point was made a number of times at the meeting that there has been insufficient public input into this deliberative process. For an issue of such import, I hope that broader public engagement will occur, through inclusion at subsequent meetings and via other means of gauging public sentiment and commentary on this issue.

Is virus gain-of-function research a security risk?

Later today, I will be speaking at the National Academies’ “Potential Risks and Benefits of Gain-of-Function Research Symposium” about the biosecurity risks of gain-of-function research. The US government announced on October 17, 2014, that there would be a pause in funding so-called “gain of function” (GOF) work with influenza, MERS, and SARS until the risks and benefits could be evaluated. Monday’s symposium is part of the “deliberative process wherein the National Science Advisory Board for Biosecurity (NSABB), the National Research Council (NRC) and the Institute of Medicine (IOM) consider the risks and benefits associated with research involving pathogens with pandemic potential.”

The term “gain-of-function” (or GOF) research came into the biosecurity lexicon in December 2011, after NSABB recommended that two influenza papers not be published in their entirety because there was methodological information related to increasing transmissibility of the virus that “could enable replication of the experiments by those who would seek to do harm.” The NSABB stated that they were not reacting to a specific bioterrorist threat, but that “publishing these experiments in detail would provide information to some person, organization, or government that would help them to develop similar mammal-adapted influenza A/H5N1 viruses for harmful purposes.” The NSABB recommendation led to a self-imposed moratorium on GOF influenza research, a year of discussion, new US policies for federal review and oversight of “dual-use research of concern,” a reversal of the NSABB stance, and in the end, the publication of the controversial papers.  Last summer, I wrote a full account of what happened, and the scientific, safety, security, and public health issues at stake.

One aspect I find fascinating about the GOF controversies is how long these security and safety issues have been brewing without resolution or consensus. Ten years ago, an article published in Science quoted prominent flu scientists about a controversial series of experiments which were at the time labeled “reassortment studies.” WHO’s principal flu scientist in 2004, Klaus Stöhr, felt that there were already plenty of ideas in the scientific literature which could help someone create a pandemic. Albert Osterhaus of Erasmus University conceded that with these techniques, “You could create a monster. But it’s a monster that nature could produce as well.” Richard Webby of St. Jude Children’s Research Hospital said in the article that anyone with a good knowledge of molecular biology could recreate a pandemic virus once it’s discovered, and that “you can destroy this virus, but it will never really be gone.”

The experiments reported on in the Science article began in 2000. Now, at the close of 2014, after many deliberations and reports, we are still hotly debating what these flu experiments could mean for biosecurity.

There are some practical points worth remembering:

  • There are few true surprises. Scientific research is incremental—so there is usually not a clear line one can draw to delineate when a particular experiment poses a security risk. In the case of flu, there is a long history of experimentation and publications describing mutations that are associated with enhanced pathogenicity or transmissibility in mammals. There is also widespread, accessible, and progressively illuminating information about the techniques that could be used to produce more worrisome influenza strains. While one or two papers may be particularly concerning—the NSABB thought so in 2011—the information that could be misused for harm was already “out there.” Because so much was already known about what genetic sequences were associated with mammalian transmissibility of flu, an editorial in Nature Biotechnology compared the NSABB recommendation to redact the GOF papers to a “cat-less bag” or “horse-less stable.”
  • Helpful shortcuts for potential bioweaponeers? Let’s face it: publishing the sequence of new, potentially pandemic-causing strains does lower barriers to weaponization of the influenza virus. While a lot of information is indeed out in the world that could be used to weaponize the flu, a bioterrorist would be able to skip some tedious and technically challenging steps if the genetic sequence of a potentially pandemic-causing influenza strain were published.
  • What are the borders? The NAS symposium is focused on GOF for influenza, SARS, and MERS, but these are not unique in the security concerns they raise. Many experimental alterations to disease agents have the potential to be used to make weapons. Past debates about whether to pursue or publish particular biological research studies that raised security concerns have ultimately resulted in publication, e.g., papers describing how anthrax and smallpox could potentially evade known vaccines. Many publications describe how bacterial pathogens can be made drug resistant. Added to that list are all the unmodified disease agents currently found in sick people, animals, and the natural environment all over the world. All have the potential to create chaos.
  • Why would someone bother? Many paths to weaponization don’t require as much R&D as GOF work on influenza, SARS, or MERS would. Making a biological weapon requires more steps than just pathogen acquisition. There is the question of delivery. There is also the question of what effect the agent could cause. Legitimate studies of pathogenicity and transmissibility are done in laboratory settings with animal models—and as we all know, results from those tests do not always translate well into human response.
  • Who are we worried about? It is difficult to assess the risks of misuse of research without focusing on a particular actor (state, non-state, or individual) who could misuse it. Unfortunately, there is little agreement about whom to worry about or what their capabilities could be, which will be a challenge in assessing security risks. There have been reports of interest in biological warfare by terrorist groups including ISIS. There are studies of the behavioral indicators of violent non-state actors that do not give hope that this problem will be less critical in the coming decade (for one example, see the University of Maryland’s Unconventional Weapons and Technology group’s Anatomizing Chemical and Biological Non-State Adversaries project). In work that my colleagues Crystal Boddie, Matt Watson, and I will be publishing soon with Gary Ackerman at the University of Maryland, we found that there is basically no agreement among biosecurity experts even about which actors we need to fear most—state actors, non-state groups, or well-trained individuals.

In the end, assessing the security impact of research – the GOF work on SARS, MERS, or influenza – has to be placed in context, along with all potential research that could theoretically be misused by a variety of actors with variable skills and resources. But it can’t be examined solely for its potential risks. The potential benefits of this work to security must also be taken into account. If the GOF work uncovers a potential strain that could cause a pandemic either naturally or deliberately, and IF that aids vaccine development, that would be a security benefit.

Given the complexities of assessing how much dual-use research of concern might lower barriers to biological weapons development (and understanding that there are particulars which make every research case different), I would hope that emphasis is placed on performing sound and safe science, which increases our understanding of the natural world and improves our abilities to fight disease. The line between that kind of research and a potential security impact—a positive one—is much clearer.