The problem of horsepox synthesis: new approaches needed for oversight and publication review for research posing population-level risks

By Tom Inglesby

In summer 2017, a team of Canadian scientists revealed that they had synthesized the horsepox virus in a lab at the University of Alberta, and planned to publish their research. I and others expressed strong concern about the work at that time, and we have opposed its publication. 

Today, the science journal PLOS ONE published this research online, so it is now globally available. The horsepox researchers acknowledged that this work may lower the bar for other scientists interested in synthesizing smallpox. Before its eradication from nature in 1980, smallpox was humanity’s greatest infectious disease killer. Almost none of the global population in 2018 has effective immunity, and smallpox vaccine supplies are available only to protect a small fraction of the world. There are only two declared smallpox repositories, one in the United States and one in Russia. Any research that reduces the challenges of synthesizing smallpox de novo outside these repositories—as this work does—should be off limits unless, perhaps, there were to be extraordinary benefits that make the risks worth taking.

This work does not carry such extraordinary benefits. As Gregory Koblentz has articulated in his paper on this issue, there is not a compelling case that governments will need or use this virus to develop a new and improved smallpox vaccine. Another justification made by the horsepox researchers was that it is important to demonstrate the feasibility of synthesizing smallpox de novo. But creating a new extraordinary risk (i.e., instructions for how to simplify smallpox synthesis) in order to show that the risk is real is a dangerous path. In any event, relevant members of the science community have widely agreed that smallpox synthesis has been technically feasible for many years now. What this new research does is show the global scientific community how to synthesize orthopox viruses efficiently, how to overcome the technical challenges, and how to employ techniques developed by one of the leading orthopox labs in the world.

Now, with this research published and accessible to the world, those of us who are deeply concerned about it should consider what is needed to prevent future events with such potentially harmful impact. This horsepox synthesis work has exposed serious flaws in how governments oversee research that could have profound adverse population-level consequences. The University of Alberta research team admitted that regulatory authorities “may not have fully appreciated the significance of, or potential need for, regulation or approval of” their work. Clearly, fundamental changes need to be made.

The most important locus of control should be whether specific research is approved and funded in the first place. When scientists are considering the pursuit of research that has the potential to increase highly consequential national population-level risks, national authorities and leading technical experts on those issues should be part of the approval process. When there are highly consequential international population-level implications, international public health leaders should also be involved. When researchers put forth claims about potential benefits of such work to justify extraordinary risks, those claims ought not to be accepted without discussion; they should instead be examined by disinterested experts with the expertise to validate or refute them.

If research posing potential population-level risks does get performed without such high-level national or international scrutiny, or without a disinterested examination of the benefits, publishers should have clear guidance from governments and a process for engaging with governments in the decision-making process regarding publication. No such system is in place in the United States or elsewhere. Journals are now often on their own, in some cases with the help of a dual-use research committee composed of individual scientists who may or may not fully understand the potential risks, or the claims of benefits, of the research being published.

We need to turn what we’ve learned from this damaging situation into actionable policies aimed at strengthening preparedness and global health security. The first step is changing the oversight and approval process for experiments that have potential to create highly consequential population-level risks. We also need a coinciding publication review system for such research with the scientific and government input necessary to avoid publishing research that increases risks to global populations.   

The Devil You Know vs. The Devil You Fear

During the Ebola epidemic, a consistent concern was that healthcare workers could bring the pathogen home because of suboptimal infection control. This concern formed part of the rationale for sequestering patients with Ebola-like symptoms at designated facilities adept at infection control—a protocol I strongly endorse.

However, the idea that only headline-grabbing contagions can be transmitted by healthcare workers gives rise to a serious threat misperception. Consider these statistics:

It is not a stretch to assume that a proportion of healthcare workers are also colonized with carbapenem-resistant Enterobacteriaceae and multidrug-resistant Acinetobacter (which contaminated 9% of gloved healthcare workers’ hands in one study). And you know how well we all wash our hands.

These individuals serve as vectors for household and community transmission of epidemiologically significant organisms that kill thousands annually. Yet when it becomes public knowledge that someone has cared for an Ebola patient, panic ensues, resulting in political pressure to implement non-evidence-based interventions, such as excluding the children of healthcare workers from school.

This kind of threat misperception hampers every outbreak response as public health authorities act increasingly to placate a panicked public and their elected officials, yet fail to place risk in its proper context.

In reality, the bigger threat is not Ebola or some exotic virus lurking on the hands of the nurse or doctor dropping off their child at daycare or school (which hopefully has high vaccination rates), but rather the usual suspects that have proven, time and again, to be much more prolific killers of the human race. While my aim is not to incite panic over what might be on healthcare workers’ hands, I do believe a little dose of reason goes a long way toward gaining a proper perspective on contagion risk.

Not Every Model Can be a Supermodel

Infectious disease modeling is an important tool that can help predict the course of an outbreak and how specific interventions might fare. However, what is often lost in press reports of a model’s results, and in the simplified explanations given to policymakers, is that a model is a projection, not an authoritative forecast. Every model rests on specific assumptions, which correspond to reality in varying degrees, as well as on predictions about microbial transmission, human behavior, and other factors. All of these introduce a degree of uncertainty into a model’s predictions.

During the peak of the Ebola panic in the US, various modeling numbers, including one suggesting that there could be up to 1.4 million cases of Ebola, circulated continually in the press without the appropriate caveats, such as the assumptions on which the model was based. Another important aspect of a model is whether it is deterministic or stochastic. This is a vital distinction because, during the course of an outbreak, things change: people modify their behavior, transmissibility changes, and countermeasures are deployed. Deterministic models produce a single fixed trajectory from a given set of assumptions and so cannot convey the range of outcomes that such variability creates. Stochastic models, by contrast, incorporate chance variation and can therefore be much more useful.
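The deterministic/stochastic distinction can be made concrete with a minimal SIR (susceptible–infective–recovered) sketch. This is purely illustrative code under assumed parameter values; the function names and numbers are mine, not the model from any paper discussed here. The deterministic version always returns one fixed trajectory, while repeated runs of the stochastic (chain-binomial) version yield a spread of possible epidemic sizes:

```python
import math
import random

def deterministic_sir(beta, gamma, s0, i0, t_max, dt=0.1):
    """Deterministic SIR via simple Euler steps: for a given set of
    parameters, every run returns the exact same trajectory."""
    s, i, r = float(s0), float(i0), 0.0
    n = s0 + i0
    for _ in range(int(round(t_max / dt))):
        new_inf = beta * s * i / n * dt   # S -> I flow this step
        new_rec = gamma * i * dt          # I -> R flow this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

def stochastic_sir(beta, gamma, s0, i0, dt=0.1, rng=None):
    """Stochastic chain-binomial SIR: each transition is a random draw,
    so repeated runs give a distribution of outcomes, including early
    die-out of the outbreak."""
    rng = rng or random.Random()
    s, i, r = s0, i0, 0
    n = s0 + i0
    while i > 0:
        # Per-individual transition probabilities over one time step.
        p_inf = 1.0 - math.exp(-beta * i / n * dt)
        p_rec = 1.0 - math.exp(-gamma * dt)
        new_inf = sum(rng.random() < p_inf for _ in range(s))
        new_rec = sum(rng.random() < p_rec for _ in range(i))
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r
```

Running the stochastic version many times and comparing the spread of final epidemic sizes against the single deterministic trajectory is one simple way to expose the uncertainty that a lone deterministic projection hides.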

A new, somewhat technical, paper published in the Proceedings of the Royal Society B makes these important points using the Ebola outbreak as an example. In this paper, King et al. compared deterministic and stochastic models, showing that deterministic models lead to an underestimation of the uncertainty in predictions.

The points addressed in this paper are important to emphasize because, in the midst of the complex decision-making that characterizes an outbreak response, understanding how certain or uncertain a possible scenario is can be vital. As such, highlighting the salient aspects of a model to policymakers and journalists can help ensure that these high-consequence decisions are fully informed.

Health Security 2014: Morals of the Story, Part 1

The year read like an infectious disease manifesto: antimicrobial resistance on the rise in the US and around the world; perennial health threats of mumps, measles, and whooping cough afflicting different parts of the US; MERS continuing to simmer in the Middle East and H7N9 in China. Americans learned about new infectious diseases in their midst or just over the horizon: Enterovirus D68 and Chikungunya. Ebola, the most alarming of all new epidemics this year, tore through West Africa and led to a small number of cases and a great level of national concern in the US. On top of this litany of epidemics came a series of safety incidents at top national labs: the accidental exposure of CDC personnel to anthrax; the discovery of live smallpox samples in an FDA lab; an erroneous shipment from CDC of a dangerous influenza strain; and now this week, a potential laboratory exposure to Ebola at CDC. Meanwhile, debate continued about the safety and value of research intended to create highly pathogenic, highly transmissible strains of viruses. If people didn't think they had a reason to tune in to health security at the start of this year, they sure have reason to now.

Preparedness for potential health security emergencies typically receives relatively little attention in the 24-hour news cycle (and the national budget). It's in our human and political nature to respond to the burning fires at hand, while leaving preparedness for future, uncertain crises for another day. But if there is one constructive thing that came out of the repeated health shocks of 2014, it's that health security could no longer be ignored. The past year showed, among other things, the importance of hospital preparedness and public health response, scientific research and development that leads to effective diagnostics and countermeasures, and reliable communications between health officials and the public.

If the year read like an infectious disease Odyssey, the lessons learned can be read like Aesop’s Fables:   

Expect the unexpected

At the start of 2014, no one working on infectious disease response or health security-related issues would have predicted that Ebola would emerge as the most important epidemic in a generation. Few would have anticipated that CDC (and more recently, the UK) would struggle with a string of laboratory accidents. Did anyone expect an emergent enterovirus would start causing paralytic illness in children in the US? And so on. No doubt there will be unexpected health security challenges in 2015 mixed in with the continuing problems already in view. We need professionals, institutions, and systems in place that are strong and nimble enough to rise to meet the challenges. Are we prepared for all of this now? We aren't, but we can continue to get better and to track our progress. Having experienced people and strong organizations in place will require sustained resources that don't rise and fall crisis to crisis.

It’s thrifty to prepare today for the wants of tomorrow

One reason Ebola has devastated Liberia, Sierra Leone, and Guinea is the terrible state of the health care infrastructure in those countries. These systems were both severely under-resourced and had far too few health care workers for national needs. Another reason the Ebola epidemic got going in the first place was the long delay in reaching an actual diagnosis. The disease spread from village to village in Guinea with no one knowing what was going on. Unreliable technology then led to a misdiagnosis of cholera, which slowed the response even further. These and the many other signs of the fragile West African health systems should help us understand that similar conditions exist in many other parts of the world. It is in the collective interest of the global community to improve the health care and outbreak response capacity of the countries that have the fewest resources. One sign that countries are paying attention to this is the Global Health Security Agenda (GHSA), an effort by over 40 countries to take specific steps to improve worldwide capacity to prevent, detect, and respond to the infectious disease threats that are sure to surprise us in the future. The more we help build critical health care infrastructure through efforts like the GHSA, the less we will suffer the Ebola-like events of tomorrow.

It’s one thing to propose, another to execute

When even some of the best labs in the country experience accidents, it's evident that accidents can happen in any lab. The experience at CDC shows us there is more work to do in building good biosafety culture and programs. Biosafety programs need to be supported by leaders and funded accordingly. Institutional Biosafety Committees need the expertise and resources to do the work they are asked to do. Scientists need to be able to report concerns and mistakes without fear of reprisal. Research into infectious disease pathogens will continue to be critical for discovering new vaccines, medicines, and diagnostics, and overall very few laboratorians are seriously sickened or hurt in US labs. But there is a very small area of research that poses special risks: research intended to create highly pathogenic, highly transmissible strains of viruses. This area of research needs to be examined with great caution, as is now occurring during the research moratorium on that work. While in the planning and proposal phase biosafety risks may appear to be fully controlled and manageable, in the execution phase human error or other lapses can occur.