Uncertainty in the Time of COVID-19, Part 2

Part 2: How Do We Know What We Know?

When a new pathogen first shows up to threaten human lives, ignorance dominates knowledge. The faster we retire our ignorance and maximize our knowledge, the better our response to any novel threat. The good news is that knowledge of what is happening during the current COVID-19 pandemic is accumulating more rapidly than it did during the SARS outbreak, in part because we have new tools available, and in part because Chinese clinicians and scientists are publishing more, and faster, than in 2003. And yet there is still a great deal of ignorance about this pathogen, and that ignorance breeds uncertainty. While it is true that the virus we are now calling SARS-CoV-2 is relatively closely related genetically to the SARS-CoV that emerged in 2002, the resulting disease we call COVID-19 is notably different from SARS. This post will dig into the methods and tools being used today in diagnosis and tracking, the epidemiological knowledge that is accumulating, and the error bars and assumptions that are absent, misunderstood, or simply wrong.

First, in all of these posts I will keep a running update of good sources of information. The Atlantic continues its excellent reporting on the lack of testing in the US by digging into the decision-making process, or lack thereof, that resulted in our current predicament. I am finding it useful to read the China CDC Weekly Reports, which constitute source data and anecdotes used in many other articles and reports.

Before diving in any further, I would observe that it is now clear that extreme social distancing works to halt the spread of the virus, at least temporarily, as demonstrated in China. It is also clear that, with widespread testing, the spread can also be controlled with less severe restrictions — but only if you assay the population adequately, which means running tests on as many people as possible, not just those who are obviously sick and in hospital.

Why does any of this matter?

In what follows, I get down into the weeds of sources of error and of sampling strategies. I suggest that the way we are using tests is obscuring, rather than improving, our understanding of what is happening. You might look at this, if you are an epidemiologist or public health person, and say that these details are irrelevant because all we really care about are actions that work to limit or slow the spread. Ultimately, as the goal is to save lives and reduce suffering, and since China has demonstrated that extreme social distancing can work to limit the spread of COVID-19, the argument might be that we should just implement the same measures and be done with it. I am certainly sympathetic to this view, and we should definitely implement measures to restrict the spread of the virus.

But it isn't that simple. First, because the population infection data is still so poor, even in China (though perhaps not in South Korea, as I explore below), every statement about successful control is in actuality still a hypothesis, yet to be tested. Those tests will come in the form of 1) additional exposure data, such as population serology studies that identify the full extent of viral spread by looking for antibodies to the virus, which persist long after an infection is resolved, and 2) carefully tracking what happens when social distancing and quarantine measures are lifted. Prior pandemics, in particular the 1918 influenza episode, showed waves of infections that recurred for years after the initial outbreak. Some of those waves are clearly attributable to premature reduction in social distancing, and different interpretations of data may have contributed to those decisions. (Have a look at this post by Tomas Pueyo, which is generally quite good, for the section with the heading "Learnings from the 1918 Flu Pandemic".) Consequently, we need to carefully consider exactly what our current data sets are teaching us about SARS-CoV-2 and COVID-19, and, indeed, whether current data sets are teaching us anything helpful at all.

What is COVID-19?

Leading off the discussion of uncertainty are differences in the most basic description of the disease known as COVID-19. The list of observed symptoms — that is, visible impacts on the human body — from the CDC includes only fever, cough, and shortness of breath, while the WHO website list is more expansive, with fever, tiredness, dry cough, aches and pains, nasal congestion, runny nose, sore throat, or diarrhea. The WHO-China Joint Mission report from last month (PDF) is more quantitative: fever (87.9%), dry cough (67.7%), fatigue (38.1%), sputum production (33.4%), shortness of breath (18.6%), sore throat (13.9%), headache (13.6%), myalgia or arthralgia (14.8%), chills (11.4%), nausea or vomiting (5.0%), nasal congestion (4.8%), diarrhea (3.7%), hemoptysis (0.9%), and conjunctival congestion (0.8%). Note that the preceding list, while quantitative in the sense that it reports the frequency of symptoms, is ultimately a list of qualitative judgements by humans.

The Joint Mission report continues with a slightly more quantitative set of statements:

Most people infected with COVID-19 virus have mild disease and recover. Approximately 80% of laboratory confirmed patients have had mild to moderate disease, which includes non-pneumonia and pneumonia cases, 13.8% have severe disease (dyspnea, respiratory frequency ≥30/minute, blood oxygen saturation ≤93%, PaO2/FiO2 ratio <300, and/or lung infiltrates >50% of the lung field within 24-48 hours) and 6.1% are critical (respiratory failure, septic shock, and/or multiple organ dysfunction/failure).

The rate of hospitalization, seriousness of symptoms, and ultimately the fatality rate depend strongly on age and, in a source of more uncertainty, perhaps on geography, points I will return to below.

What is the fatality rate, and why does it vary so much?

The Economist has a nice article exploring the wide variation in reported and estimated fatality rates, which I encourage you to read (also this means I don’t have to write it). One conclusion from that article is that we are probably misestimating fatalities due to measurement error. The total rate of infection is probably higher than is being reported, and the absolute number of fatalities is probably higher than generally understood. To this miscalculation I would add an additional layer of obfuscation, which I happened upon in my earlier work on SARS and the flu.

It turns out that we are probably significantly undercounting deaths due to influenza. This hypothesis is driven by a set of observations of anticorrelations between flu vaccination and deaths ascribed to stroke, myocardial infarction (“heart attack”), and “sudden cardiac death”, where the latter is the largest cause of “natural” death in the United States. Influenza immunization reduces the rate of those causes of death by 50-75%. The authors conclude that the actual number of people who die from influenza infections could be 4X-2.5-5X higher than the oft cited 20,000-40,000.

How could the standard estimate be so far off? Consider these two situations: First, if a patient is at the doctor or in the hospital due to symptoms of the flu, they are likely to undergo a test to rule in, or out, the flu. But if a patient comes into the ER in distress and then passes away, or if they die before getting to the hospital, then that molecular diagnostic is much less likely to be used. And if the patient is elderly and already suffering from an obvious likely cause of death, for example congestive heart failure, kidney failure, or cancer, then that is likely to be what goes on the death certificate. Consequently, particularly among older people with obvious preexisting conditions, the case fatality rate for influenza is likely to be underestimated, and that is for a pathogen that is relatively well understood and for which there is unlikely to be a shortage of diagnostic kits.

Having set that stage, it is no leap at all to hypothesize that the fatality rate for COVID-19 is likely to be significantly underestimated. And then if you add in insufficient testing, and thus insufficient diagnostics, as I explore below, it seems likely that many fatalities caused by COVID-19 will be attributed to something else, particularly among the elderly. The disease is already quite serious among those diagnosed who are older than 70. I expect that the final toll will be greater in communities that do not get the disease under control.
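To make the direction of these biases concrete, here is a minimal sketch in Python. Every number below is invented for illustration, not data from any country; the point is only to show how undercounting infections and undercounting deaths pull the naive case fatality rate in opposite directions while raising the absolute toll.

```python
# Illustrative only: every number below is an assumption, not data.
confirmed_cases = 80_000        # infections detected by testing
reported_deaths = 3_000         # deaths attributed to COVID-19

# Naive case fatality rate (CFR), computed from reported figures alone.
naive_cfr = reported_deaths / confirmed_cases

# Hypothetical correction factors:
case_undercount = 5.0           # assume only 1 in 5 infections is ever confirmed
death_undercount = 1.5          # assume some deaths are attributed to other causes

true_infections = confirmed_cases * case_undercount
true_deaths = reported_deaths * death_undercount

# Infection fatality rate (IFR) under these assumptions.
adjusted_ifr = true_deaths / true_infections

print(f"naive CFR:    {naive_cfr:.2%}")     # 3.75%
print(f"adjusted IFR: {adjusted_ifr:.2%}")  # 1.12%: a lower rate, but more total deaths
```

The two corrections act in opposite directions on the rate itself, which is part of why published fatality rates vary so widely even as the absolute number of deaths keeps being revised upward.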

Fatality rate in China as reported by China CDC.

How is COVID-19 diagnosed?

For most of history, medical diagnoses have been determined by comparing patient symptoms (again, these are human-observable impacts on a patient, usually constituting natural language nouns and adjectives) with lists that doctors together agree define a particular condition. Recently, this qualitative methodology has been slowly amended with quantitative measures as they have become available: e.g., pulse, blood pressure, EEG and EKG, blood oxygen content, "five part diff" (which quantifies different kinds of blood cells), CT, MRI, blood sugar levels, liver enzyme activity, lung and heart pumping volume, viral load, and now DNA and RNA sequencing of tissues and pathogens. These latter tools have become particularly important in genetically tracking the spread of SARS-CoV-2, because by following the sequence around the world you can sort out at the individual case level where it came from. And then simply being able to specifically detect viral RNA to provide a diagnosis is important because COVID-19 symptoms (other than the fatality rate) are quite similar to those of the seasonal flu. Beyond differentiating COVID-19 from "influenza like illness", new tools are being brought to bear that enable near real time quantification of viral RNA, which enables estimating viral load (number of viruses per sample volume), and which in turn facilitates 1) understanding how the disease progresses and 2) understanding how infectious patients are over time. These molecular assays are the result of decades of technology improvement, which has resulted in highly automated systems that take in raw clinical samples, process them, and deliver results electronically, at least in those labs that can afford such devices. Beyond these achievements, novel diagnostic methods based on the relatively recent development of CRISPR as a tool are already in the queue to be approved for use amidst the current pandemic. The pandemic is serving as a shock to the system to move diagnostic technology faster. We are watching in real time a momentous transition in the history of medicine, which is giving us a glimpse of the future. How are all these tools being applied today?
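As a concrete illustration of what "quantification of viral RNA" means in practice, here is a minimal sketch of how a real-time RT-PCR cycle threshold (Ct) is converted into an estimated copy number using a standard curve. The slope, intercept, and Ct values below are invented for illustration and are not taken from any published SARS-CoV-2 assay.

```python
# Hypothetical standard curve from serial dilutions of a known template:
#   Ct = slope * log10(copies per reaction) + intercept
slope = -3.32       # ~ -3.32 corresponds to ~100% amplification efficiency
intercept = 40.0    # nominal Ct for a single copy per reaction

def copies_from_ct(ct: float) -> float:
    """Invert the standard curve to estimate RNA copies per reaction."""
    return 10 ** ((ct - intercept) / slope)

# A low Ct means the target was abundant (fewer cycles needed to cross threshold).
for ct in (20.0, 25.0, 30.0, 35.0):
    print(f"Ct {ct:4.1f}  ->  ~{copies_from_ct(ct):,.0f} RNA copies per reaction")
```

Converting copies per reaction into viral load per swab or per milliliter then requires knowing extraction and input volumes, each of which adds its own error bar.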

(Note: My original intention with this post was to look at the error rates of all the steps for each diagnostic method. I will explain why I think this is important, but other matters are more pressing at present, so the detailed error analysis will get short shrift for now.)

Recapitulating an explanation of relevant diagnostics from Part 1 of this series (with a slight change in organization):

There are three primary means of diagnosis:

1. The first is by display of symptoms, which can span a long list, from cold-like runny nose, fever, sore throat, and other upper respiratory features, to much less pleasant, and in some cases deadly, lower respiratory impairment. (I recently heard an expert on the virus say that there are two primary ways that SARS-like viruses can kill you: "Either your lungs fill up with fluid, limiting your access to oxygen, and you drown, or all the epithelial cells in your lungs slough off, limiting your access to oxygen, and you suffocate." Secondary infections are also more lethal for people experiencing COVID-19 symptoms.)

2. The second method of diagnosis is imaging of lungs, which includes x-ray and CT scans; SARS-CoV-2 causes particular pathologies in the lungs that can be identified on images and that distinguish it from other respiratory viruses.

3. Thirdly, the virus can be diagnosed via two molecular assays, the first of which uses antibodies to directly look for viral proteins in tissue or fluid samples, while the other looks for whether genetic material is present; sophisticated versions can quantify how many copies of viral RNA are present in a sample.

Imaging of lungs via x-ray and CT scan appears to be an excellent means to diagnose COVID-19 due to a distinct set of morphological features that appear throughout infected tissue, though those features also appear to change during the course of the disease. This study also examined diagnosis via PCR assays, and found a surprisingly high rate of false negatives. It is not clear from the text whether all patients had two independent swabs and accompanying tests, so either 10 or 12 total tests were done. If 10 were done, there are two clear false negatives, for a 20% failure rate; if 12 were done, there are up to four false negatives, for a 33% failure rate. The authors observe that "the false negative rate of oropharyngeal swabs seems high." Note that this study directly compares the molecular assay with imaging, and the swab/PCR combo definitely comes up short. This is important because imaging is too low-throughput and expensive to definitively diagnose even the serious cases, let alone to sample the larger population in order to track and get ahead of the outbreak; we need rapid, accurate molecular assays. We need to have confidence in testing.
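Those failure rates also come from very small denominators, so the error bars are wide. Here is a quick sketch using an exact (Clopper-Pearson) binomial interval; the 2-of-10 and 4-of-12 counts are the two readings of the study discussed above, and the calculation assumes SciPy is available.

```python
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion k/n."""
    lower = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    upper = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lower, upper

# Two readings of the same small study (see text): 2 misses out of 10 tests,
# or 4 misses out of 12 tests, depending on how the swabs were counted.
for misses, total in [(2, 10), (4, 12)]:
    lo, hi = clopper_pearson(misses, total)
    print(f"{misses}/{total} false negatives: point estimate {misses/total:.0%}, "
          f"95% CI roughly {lo:.0%}-{hi:.0%}")
```

Even the optimistic reading is consistent with a false negative rate anywhere from a few percent to over half, which is exactly the kind of uncertainty that never makes it into headline "accuracy" numbers.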

How does “testing” work? First, testing is not some science fiction process that involves pointing a semi-magical instrument like a Tricorder at a patient and instantly getting a diagnosis. In reality, testing involves multiple process steps implemented by humans — humans who sometimes are inadequately trained or who make mistakes. And then each of those process steps has an associated error or failure rate. You almost never hear about the rate of mistakes, errors, or failures in reporting on “testing”, and that is a problem.
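To see why the per-step details matter, here is a toy calculation showing how errors compound across a multi-step workflow. Every probability below is an invented, illustrative number, not a measured property of any real assay; the point is that several individually good steps can still add up to mediocre overall sensitivity.

```python
# Invented, illustrative probabilities that each step works as intended.
steps = {
    "swab collects enough virus":          0.90,
    "sample survives transport/storage":   0.95,
    "RNA extraction succeeds":             0.95,
    "RT-PCR detects target if present":    0.95,
    "result recorded/reported correctly":  0.99,
}

overall_sensitivity = 1.0
for step, p in steps.items():
    overall_sensitivity *= p
    print(f"{step:38s} {p:.0%}")

print(f"\nChance an infected person actually tests positive: {overall_sensitivity:.0%}")
# Even with every step at 90-99%, this workflow misses roughly 1 in 4 infections.
```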

Let’s take the testing process in order. For sample collection the CDC Recommendations include nasopharyngeal and oropharyngeal (i.e., nose and throat) swabs. Here is the Wikipedia page on RT-PCR, which is a pretty good place to start if you are new to these concepts.

The Seattle Flu Study and the UW Virology COVID-19 program often rely on home sample collection from nasal and throat swabs. My initial concern about this testing method was motivated in part by the fact that it was quite difficult to develop a swab-PCR for SARS-CoV that delivered consistent results, where part of the difficulty was simply in collecting a good patient sample. I have a nagging fear that not everyone who is collecting these samples today is adequately trained to get a good result, or has been tested to ensure they are proficient at this skill. The number of sample takers has clearly expanded significantly around the world in the last couple of weeks, with more expansion to come. So I leave this topic with a question: is there a clinical study that examines the success rate of sample collection by people who are not trained to do this every day?

On to the assays themselves: I am primarily concerned at the moment with the error bars on the detection assays. The RT-PCR assay data in China are not reported with errors (or even variance, which would be an improvement). Imaging is claimed to be 90-95% accurate (against what standard is unclear), and the molecular assays worse than that by some amount. Anecdotal reports are that they have been only 50-70% accurate, with assertions of as low as 10% in some cases. This suggests that, in addition to large probable variation in the detectable viral load, and possible quality variations in the kits themselves, human sample handling and lab error is quite likely the dominant factor in accuracy. There was a report of an automated high throughput testing lab getting set up in a hurry in Wuhan a couple of weeks ago, which might be great if the reagent quality is sorted, but I haven't seen any reports of whether that worked out. So the idea that the "confirmed" case counts are representative of reality even in hospitals or care facilities is tenuous at best. South Korea has certainly done a better job of adequate testing, but even there questions remain about the accuracy of the testing, as reported by the Financial Times:

Hong Ki-ho, a doctor at Seoul Medical Centre, believed the accuracy of the country’s coronavirus tests was “99 per cent — the highest in the world”. He pointed to the rapid commercial development and deployment of new test kits enabled by a fast-tracked regulatory process. “We have allowed test kits based on WHO protocols and never followed China’s test methods,” Dr Hong said.

However, Choi Jae-wook, a medical professor of preventive medicine at Korea University, remained “worried”. “Many of the kits used at the beginning stage of the outbreak were the same as those in China where the accuracy was questioned . . . We have been hesitating to voice our concern because this could worry the public even more,” Mr Choi said.

At some point (hopefully soon) we will see antibody-based tests being deployed that will enable serology studies of who has been previously infected. The US CDC is developing these serologic tests now, and we should all hope the results are better than the initial round of CDC-produced PCR tests. We may also be fortunate and find that these assays could be useful for diagnosis, as lateral flow assays (like pregnancy tests) can be much faster than PCR assays. Eventually something will work, because this antibody detection is tried and true technology.

To sum up: I had been quite concerned about reports of problems (high error rates) with the PCR assay in China and in South Korea. Fortunately, it appears that more recent PCR data is more trustworthy (as I will discuss below), and that automated infrastructure being deployed in the US and Europe may improve matters further. The automated testing instruments being rolled out in the US should — should — have lower error rates and higher accuracy. I still worry about the error rate on the sample collection. However, detection of the virus may be facilitated because the upper respiratory viral load for SARS-CoV-2 appears to be much higher than for SARS-CoV, a finding with further implications that I will explore below.

How is the virus spread?

(Note: the reporting on asymptomatic spread has changed a great deal just in the last 24 hours. Not all of what appears below is updated to reflect this yet.)

The standard line, if there can be one at this point, has been that the virus is spread by close contact with symptomatic patients. This view is bolstered by claims in the WHO Joint Mission report: “Asymptomatic infection has been reported, but the majority of the relatively rare cases who are asymptomatic on the date of identification/report went on to develop disease. The proportion of truly asymptomatic infections is unclear but appears to be relatively rare and does not appear to be a major driver of transmission.”(p.12) These claims are not consistent with a growing body of clinical observations. Pinning down the rate of asymptomatic, or presymptomatic, infections is important for understanding how the disease spreads. Combining that rate with evidence that patients are infectious while asymptomatic, or presymptomatic, is critical for planning response and for understanding the impact of social distancing.

Two sentences in the Science news piece describing the Joint Mission report undermine all the quantitative claims about impact and control: “A critical unknown is how many mild or asymptomatic cases occur. If large numbers of infections are below the radar, that complicates attempts to isolate infectious people and slow spread of the virus.” Nature picked up this question earlier this week: “How much is coronavirus spreading under the radar?” The answer: probably quite a lot.

A study of cases apparently contracted in a shopping mall in Wenzhou concluded that the most likely explanation for the pattern of spread is “that indirect transmission of the causative virus occurred, perhaps resulting from virus contamination of common objects, virus aerosolization in a confined space, or spread from asymptomatic infected persons.”

Another recent paper, in which the authors built an epidemiological transmission model of all the documented cases in Wuhan, found that, at best, only 41% of the total infections were "ascertained" by diagnosis, while the most likely ascertainment rate was a mere 21%. That is, the model best fits the documented case statistics when 79% of the total infections were unaccounted for by direct diagnosis.
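The arithmetic behind that statement is worth making explicit. A small sketch, using a hypothetical confirmed-case count (the ascertainment rates are the ones quoted from the paper above; everything else is invented):

```python
confirmed_cases = 50_000   # hypothetical number of diagnosed ("ascertained") infections

# Best-case and most-likely ascertainment rates quoted from the modeling paper.
for ascertainment_rate in (0.41, 0.21):
    implied_total = confirmed_cases / ascertainment_rate
    undetected = implied_total - confirmed_cases
    print(f"ascertainment {ascertainment_rate:.0%}: ~{implied_total:,.0f} total infections, "
          f"~{undetected:,.0f} never counted")
```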

Finally, a recent study of patients early after infection clearly shows “that COVID-19 can often present as a common cold-like illness. SARS-CoV-2 can actively replicate in the upper respiratory tract, and is shed for a prolonged time after symptoms end, including in stool.” The comprehensive virological study demonstrates “active [infectious] virus replication in upper respiratory tract tissues”, which leads to a hypothesis that people can present with cold-like symptoms and be infectious. I will quote more extensively from the abstract, as this bit is crucially important:

Pharyngeal virus shedding was very high during the first week of symptoms (peak at 7.11 X 10^8 RNA copies per throat swab, day 4). Infectious virus was readily isolated from throat- and lung-derived samples, but not from stool samples in spite of high virus RNA concentration. Blood and urine never yielded virus. Active replication in the throat was confirmed by viral replicative RNA intermediates in throat samples. Sequence-distinct virus populations were consistently detected in throat- and lung samples of one same patient. Shedding of viral RNA from sputum outlasted the end of symptoms. Seroconversion occurred after 6-12 days, but was not followed by a rapid decline of viral loads.

That is, you can be sick for a week with minimal- to mild symptoms, shedding infectious virus, before antibodies to the virus are detectable. (This study also found that “Diagnostic testing suggests that simple throat swabs will provide sufficient sensitivity at this stage of infection. This is in stark contrast to SARS.” Thus my comments above about reduced concern about sampling methodology.)

So the virus is easy to detect because it is plentiful in the throat, which unfortunately also means that it is easy to spread. And then even after you begin to have a specific immune response, detectable as the presence of antibodies in blood, viral loads stay high.

The authors conclude, rather dryly, with an observation that “These findings suggest adjustments of current case definitions and re-evaluation of the prospects of outbreak containment.” Indeed.

One last observation from this paper is eye opening, and needs much more study: “Striking additional evidence for independent replication in the throat is provided by sequence findings in one patient who consistently showed a distinct virus in her throat as opposed to the lung.” I am not sure we have seen something like this before. Given the high rate of recombination between strains in this family of betacoronaviruses (see Part 1), I want to flag the infection of different tissues by different strains as a possibly worrying route to more viral innovation, that is, evolution.

STAT+ News summarizes the above study as follows:

The researchers found very high levels of virus emitted from the throat of patients from the earliest point in their illness — when people are generally still going about their daily routines. Viral shedding dropped after day 5 in all but two of the patients, who had more serious illness. The two, who developed early signs of pneumonia, continued to shed high levels of virus from the throat until about day 10 or 11.

This pattern of virus shedding is a marked departure from what was seen with the SARS coronavirus, which ignited an outbreak in 2002-2003. With that disease, peak shedding of virus occurred later, when the virus had moved into the deep lungs.

Shedding from the upper airways early in infection makes for a virus that is much harder to contain. The scientists said at peak shedding, people with Covid-19 are emitting more than 1,000 times more virus than was emitted during peak shedding of SARS infection, a fact that likely explains the rapid spread of the virus. 

Yesterday, CNN joined the chorus of reporting on the role of asymptomatic spread. It is a nice summary, and makes clear that not only is "presymptomatic transmission commonplace", it is a demonstrably significant driver of infection. Michael Osterholm, director of the Center for Infectious Disease Research and Policy (CIDRAP) at the University of Minnesota, and always ready with a good quote, was given the opportunity to put the nail in the coffin on the denial of asymptomatic spread:

"At the very beginning of the outbreak, we had many questions about how transmission of this virus occurred. And unfortunately, we saw a number of people taking very firm stances about it was happening this way or it wasn't happening this way. And as we have continued to learn how transmission occurs with this outbreak, it is clear that many of those early statements were not correct," he said. 

"This is time for straight talk," he said. "This is time to tell the public what we know and don't know."

There is one final piece of the puzzle that we need to examine to get a better understanding of how the virus is spreading. You may have read about characterizing the infection rate by the basic reproduction number, R0, which is a statistical measure that captures the average dynamics of transmission. There is another metric, the "secondary attack rate" (SAR), which is a measurement of the rate of transmission in specific cases in which a transmission event is known to have occurred. The Joint Mission report cites an SAR in the range of 5-10% in family settings, which is already concerning. But there is another study (that, to be fair, came out after the Joint Mission report) of nine instances in Wuhan that calculates that the secondary attack rate in specific community settings was 35%. That is, assuming one initially infected person per room attended an event in which spread is known to have happened, on average 35% of those present were infected. In my mind, this is the primary justification for limiting social contacts — this virus appears to spread extremely well when people are in enclosed spaces together for a couple of hours, possibly handling and sharing food.
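For clarity, the secondary attack rate is just the fraction of susceptible people exposed at a known transmission event who become infected, with the index case excluded from the denominator. A minimal sketch with invented event data (not the nine Wuhan events from the study):

```python
# Invented example events: (susceptible attendees excluding the index case,
#                           secondary infections among them)
events = [(12, 5), (30, 9), (8, 4), (20, 6)]

exposed = sum(e for e, _ in events)
infected = sum(i for _, i in events)

print(f"pooled secondary attack rate: {infected}/{exposed} = {infected/exposed:.0%}")
for e, i in events:
    print(f"  event with {e:2d} exposed attendees: SAR = {i/e:.0%}")
```

The useful feature of the SAR is that, unlike R0, it is tied to a concrete setting, which is why it bears directly on decisions about gatherings and enclosed spaces.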

Many missing pieces must be filled in to understand whether the high reported SAR above is representative globally. For instance, what were the environmental conditions (humidity, temperature) and ventilation like at those events? Was the source of the virus a food handler, or otherwise a focus of attention and close contact, or were they just another person in the room? Social distancing and eliminating public events was clearly important in disrupting the initial outbreak in Wuhan, but without more specific information about how community spread occurs we are just hanging on, hoping old fashioned public health measures will slow the thing down until countermeasures (drugs and vaccines) are rolled out. And when the social control measures are lifted, the whole thing could blow up again. Here is Osterholm again, from the Science news article covering the Joint Mission report:

“There’s also uncertainty about what the virus, dubbed SARS-CoV-2, will do in China after the country inevitably lifts some of its strictest control measures and restarts its economy. COVID-19 cases may well increase again.”

“There’s no question they suppressed the outbreak,” says Mike Osterholm, head of the Center for Infectious Disease Research and Policy at the University of Minnesota, Twin Cities. “That’s like suppressing a forest fire, but not putting it out. It’ll come roaring right back.”

What is the age distribution of infections?

The short answer here is that everyone can get infected. The severity of one's response appears to depend strongly on age, as does the final outcome of the disease (the "endpoint", as it is somewhat ominously referred to). Here we run smack into another measurement problem, because in order to truly understand who is infected, we would need to be testing broadly across the population, including a generous sample of those who are not displaying symptoms. Because only South Korea has been sampling so widely, it appears to be the only country with a data set that gives some sense of the age distribution of infections across the whole population. Beyond the sampling problem, I found it difficult to find this sort of demographic data published anywhere on the web.

Below is the only age data I have been able to come up with, admirably cobbled together by Andreas Backhaus from screenshots of data out of South Korea and Italy.

Why would you care about this? Because, in many countries, policy makers have not yet closed schools, restaurants, or pubs that younger and healthier members of the population tend to frequent. If this population is either asymptomatic or mildly symptomatic, but still infectious — as indicated above — then they are almost certainly spreading virus not only amongst themselves, but also to members of their families who may be more likely to experience severe symptoms. Moreover, I am led to speculate by the different course of disease in different communities that the structure of social contacts could be playing a significant role in the spread of the virus. Countries that have a relatively high rate of multi-generational households, in which elderly relatives live under the same roof as young people, could be in for a rough ride with COVID-19. If young people are out in the community, exposed to the virus, then their elderly relatives at home have a much higher chance of contracting the virus. Here is the distribution of multigenerational households by region, according to the UN:

Distribution of multigenerational households by region. Source: UN.

The end result of all this is that we — humanity at large, and in particular North America and Europe — need to do a much better job of containment in our own communities in order to reduce morbidity and mortality caused by SARS-CoV-2.

How did we get off track with our response?

It is important to understand how the WHO got the conclusion about the modes of infection wrong. By communicating so clearly that they believed there was a minimal role for asymptomatic spread, the WHO sent a mixed message that, while extreme social distancing works, perhaps it was not so necessary. Some policy makers clearly latched onto the idea that the disease only spreads from very sick people, and that if you aren’t sick then you should continue to head out to the local pub and contribute to the economy. The US CDC seems to have been slow to understand the error (see the CNN story cited above), and the White House just ran with the version of events that seemed like it would be politically most favorable, and least inconvenient economically.

The Joint Mission based the assertion that asymptomatic and presymptomatic infection is “rare” on a study in Guangdong Province. Here is Science again: “To get at this question, the report notes that so-called fever clinics in Guangdong province screened approximately 320,000 people for COVID-19 and only found 0.14% of them to be positive.” Caitlin Rivers, from Johns Hopkins, hit the nail on the head by observing that “Guangdong province was not a heavily affected area, so it is not clear whether [results from there hold] in Hubei province, which was the hardest hit.”

I am quite concerned (and, frankly, disappointed) that the WHO team took at face value the idea that the large scale screening effort in Guangdong, which found a very low "asymptomatic count", is somehow representative of anywhere else. Guangdong has a ~50X lower "case count" than Hubei, and a ~400X lower fatality rate, according to the Johns Hopkins Dashboard on 15 March — the disparity was probably even larger when the study was performed. The course of the disease was clearly quite different in Guangdong than in Hubei.

Travel restrictions and social distancing measures appear to have had a significant impact on spread from Hubei to Guangdong, and within Guangdong, which means that we can't really know how many infected individuals were in Guangdong, or how many of those were really out in the community. A recent study computed the probability of spread from Wuhan to other cities given both the population of the city and the number of inbound trips from Wuhan; for Guangzhou, in Guangdong, the number of infections was anomalously low given its very large population. That is, compared with other transmission chains in China, Guangdong wound up with many fewer cases than you would expect, and the case count there is therefore not representative. Consequently, the detected infection rate in Guangdong is not a useful metric for understanding anything but Guangdong. The number relevant for epidemiological modeling is the rate of asymptomatic infection in the *absence* of control measures, because that tells us how the virus behaves without draconian social distancing, and any return to normalcy in the world will not have that sort of control measure in place.
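For intuition about why Guangzhou's case count looks anomalously low, here is the standard back-of-the-envelope importation logic. This is not the cited study's model; it is a simple Poisson-style sketch with invented prevalence and travel numbers, showing that expected importations scale with prevalence in the source city times travel volume.

```python
import math

# Back-of-the-envelope importation risk. This is NOT the cited study's model,
# just the standard reasoning: expected exported infections scale with
# prevalence in the source city times travel volume to the destination.
wuhan_population = 11_000_000
active_infections = 20_000          # hypothetical prevalence at some point in January
prevalence = active_infections / wuhan_population

days = 14
for daily_trips in (50, 500, 5000):     # hypothetical inbound trips from Wuhan per day
    expected_imports = prevalence * daily_trips * days
    p_at_least_one = 1 - math.exp(-expected_imports)   # Poisson approximation
    print(f"{daily_trips:5d} trips/day: expected imports {expected_imports:6.1f}, "
          f"P(>=1 importation) {p_at_least_one:.0%}")
```

Under almost any reasonable assumptions, a large city with heavy travel links to Wuhan should have imported many infections, which is why a low detected count says more about control measures and detection than about the virus.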

Now, if I am being charitable, it may have been that the only large scale screening data set available to the Joint Mission at the time was from Guangdong. The team needed to publish a report, and saying something about asymptomatic transmission was critically important to telling a comprehensive story, so perhaps they went with the only data they had. But the conclusions smelled wrong to me as soon as they were announced. I wrote as much to several reporters and on Twitter, observing that the WHO report was problematic because it assumed the official case counts approximated the actual number of infections, but I couldn’t put my finger on exactly what bugged me until I could put together the rest of the story above. Nevertheless, the WHO has a lot of smart people working for it; why did the organization so quickly embrace and promulgate a narrative that was so obviously problematic to anyone who knows about epidemiology and statistics?

What went wrong at the WHO?

There are some very strong opinions out there regarding the relationship between China and the WHO, and how that relationship impacts the decisions made by Director-General Dr. Tedros Adhanom. I have not met Dr. Tedros and only know what I read about him. However, I do have personal experience with several individuals now higher up in the chain of command for the WHO coronavirus response, and I have no confidence in them whatsoever. Here is my backstory.

I have wandered around the edges of the WHO for quite a while, and have spent most of my time in Geneva at the UN proper and working with the Biological Weapons Convention Implementation Support Unit. Then, several years ago, I was asked to serve on a committee at WHO HQ. I wasn’t particularly enthusiastic about saying yes, but several current and former high ranking US officials convinced me it was for the common good. So I went. It doesn’t matter which committee at the moment. What does matter is that, when it came time to write the committee report, I found that the first draft embraced a political narrative that was entirely counter to my understanding of the relevant facts, science, and history. I lodged my objections to the draft in a long minority report that pointed out the specific ways in which the text diverged from reality. And then something interesting happened.

I received a letter informing me that my appointment to the committee had been a mistake, and that I was actually supposed to be just a technical advisor. Now, the invitation said “member”, and all the documents that I signed beforehand said “member”, with particular rights and responsibilities, including a say in the text of the report. I inquired with the various officials who had encouraged me to serve, as well as with a diplomat or two, and the unanimous opinion was that I had been retroactively demoted so that the report could be written without addressing my concerns. All of those very experienced people were quite surprised by this turn of events. In other words, someone in the WHO went to surprising lengths to try to ensure that the report reflected a particular political perspective rather than facts, history, and science. Why? I do not know what the political calculations were. But I do know this: the administrative leadership in charge of the WHO committee I served on is now high up in the chain of command for the coronavirus response.

Coda: as it turns out, the final report hewed closely to reality as I understood it, and embraced most of the points I wanted it to make. I infer, but do not know for certain, that one or more other members of the committee — who presumably could not be shunted aside so easily, and who presumably had far more political heft than I do — picked up and implemented my recommended changes. So all's well that ends well? But the episode definitely contributed to my education (and cynicism) about how the WHO balances politics and science, and I am ill-disposed to trust the organization. Posting my account may mean that I am not invited to hang out at the WHO again. This is just fine.

How much bearing does my experience have on what is happening now in the WHO coronavirus response? I don’t know. You have to make up your own mind about this. But having seen the sausage being made, I am all too aware that the organization can be steered by political considerations. And that definitely increases uncertainty about what is happening on the ground. I won’t be writing or saying anything more specific about that particular episode at this time.

Uncertainty in the Time of COVID-19, Part 1

Part 1: Introduction

Times being what they are, in which challenging events abound and good information is hard to come by, I am delving back into writing about infectious disease (ID). While I’ve not been posting here about the intersection of ID, preparedness, and biosecurity, I have continued to work on these problems as a consultant for corporations, the US government, and the WHO. More on that in a bit, because my experience on the ground at the WHO definitely colors my perception of what the organization has said about events in China.

These posts will primarily be a summary of what we do, and do not, know about the current outbreak of the disease named COVID-19, and its causative agent, a coronavirus known officially as SARS-CoV-2 (for “SARS coronavirus-2”). I am interested in 1) what the ground truth is as best we can get to it in the form of data (with error bars), and I am interested in 2) claims that are made that are not supported by that data. You will have read definitive claims that COVID-19 will be no worse than a bad flu, and you will have read definitive claims that the sheer number of severe cases will overwhelm healthcare systems around the world, potentially leading to shocking numbers of fatalities. The problem with any definitive claim at this point is that we still have insufficient concrete information about the basic molecular biology of the virus and the etiology of this disease to have a good idea of what is going to happen. Our primary disadvantage right now is that uncertainty, because uncertainty necessarily complicates both our understanding of the present and our planning for the future.

Good sources of information: If you want to track raw numbers and geographical distribution, the Johns Hopkins Coronavirus COVID-19 Global Cases dashboard is a good place to start, with the caveat that “cases” here means those officially reported by national governments, which data are not necessarily representative of what is happening out in the real world. The ongoing coverage at The Atlantic about testing (here, and here, for starters) is an excellent place to read up on the shortcomings of the current US approach, as well as to develop perspective on what has happened as a result of comprehensive testing in South Korea. Our World In Data has a nice page, updated often, that provides a list of basic facts about the virus and its spread (again with a caveat about “case count”). Nextstrain is a great tool to visualize how the various mutations of SARS-CoV-2 are moving around the world, and changing as they go. That we can sequence the virus so quickly is a welcome improvement in our response, as it allows sorting out how infection is spreading from one person to another, and one country to another. This is a huge advance in human capability to deal with pathogen outbreaks. However, and unfortunately, this is still retrospective information, and means we are chasing the virus, not getting ahead of it.

How did we get here?

My 2006 post, "Nature is Full of Surprises, and We Are Totally Unprepared", summarizes some of my early work with Bio-era on pandemic preparedness and response planning, which involved looking back at SARS and various influenza epidemics in order to understand future events. One of the immediate observations you make from even a cursory analysis of outbreaks is that pathogen surveillance in both animals and humans needs to be an ongoing priority. Bio-era concluded that humanity would continue to be surprised by zoonotic events in the absence of a concerted effort to build up global surveillance capacity. We recommended to several governments that they address this gap by aggressively rolling out sampling and sequencing of wildlife pathogens. And then not much happened to develop any real surveillance capacity until — guess what — we were surprised again by the 2009 H1N1 (aka Mexican, aka Swine) flu outbreak, which nobody saw coming because nobody was looking in the right place.

In the interval since, particularly in the wake of the “West Africa” Ebola outbreak that started in 2013, global ID surveillance has improved. The following years also saw lots of news about the rise of the Zika virus and the resurgence of Dengue, about which I am certain we have not heard the last. In the US, epidemic planning and response was finally taken seriously at the highest levels of power, and a Global Health and Security team was established within the National Security Council. That office operated until 2018, when the current White House defunded the NSC capability as well as a parallel effort at DHS (read this Foreign Policy article by Laurie Garrett for perspective: “Trump Has Sabotaged America’s Coronavirus Response”). I am unable to be adequately politic about these events just yet, even when swearing like a sailor, so I will mostly leave them aside for now. I will try to write something about US government attitudes about preparing to deal with lethal infectious diseases under separate cover; in the meantime you might get some sense of my thinking from my memorial to virologist Mark Buller.

Surprise? Again?

Outside the US government, surveillance work has continued. The EcoHealth Alliance has been on the ground in China for many years now, sequencing animal viruses, particularly from bats, in the hopes of getting a jump on the next zoonosis. I was fortunate to work with several of the founders of the EcoHealth Alliance, Drs. Peter Daszak and Billy Karesh, during my time with Bio-era. They are good blokes. Colorful, to be sure — which you sort of have to be to get out of bed with the intention of chasing viruses into bat caves and jumping out of helicopters to take blood samples from large predators. The EcoHealth programs have catalogued a great many potential zoonotic viruses over the years, including several that are close relatives of both SARS-CoV (the causative agent of SARS) and SARS-CoV-2. And then there is Ralph Baric, at UNC, who with colleagues in China has published multiple papers over the years pointing to the existence of a cluster of SARS-like viruses circulating in animals in Hubei. See, in particular, “A SARS-like cluster of circulating bat coronaviruses shows potential for human emergence”, which called out in 2015 a worrisome group of viruses to which SARS-CoV-2 belongs. This work almost certainly could not have picked out that specific virus before it jumped to humans, because that would require substantially more field surveillance and more dedicated laboratory testing than has been possible with existing funding. But Baric and colleagues gave a clear heads up that something was brewing. And yet we were “surprised”, again. (Post publication note: For more on what has so far been learned about the origin of the virus, see this absolutely fantastic article in Scientific American that came out today: How China’s “Bat Woman” Hunted Down Viruses from SARS to the New Coronavirus, by Jane Qiu. I will come back to it in later installments of this series. It is really, really good.)

Not only were we warned, we have considerable historical experience that (wildlife consumption + coronavirus + humans) leads to zoonosis, or a disease that jumps from animals to humans. This particular virus still caught us unawares; it snuck up on us because we need to do a much better job of understanding how viruses jump from animal hosts to humans. Unless we start paying closer attention, it won’t be the last time. The pace of zoonotic events among viruses related to SARS-CoV has accelerated over the last 25 years, as I will explore in a forthcoming post. The primary reason for this acceleration, according to the wildlife veterinarians and virus hunters I talk to, is that humans continue to both encroach on natural habitats and to bring animals from those habitats home to serve for dinner. So in addition to better surveillance, humans could reduce the chance of zoonosis by eating fewer wild animals. Either way, the lesson of being surprised by SARS-CoV-2 is that we must work much harder to stay ahead of nature.

Why is the US, in particular, so unprepared to deal with this virus?

The US government has a long history of giving biological threats and health security inadequate respect. Yes, there have always been individuals and small groups inside various agencies and departments who worked hard to increase our preparedness and response efforts. But people at the top have never fully grasped what is at stake and what needs to be done.

Particularly alarming, we have recently experienced a unilateral disarming in the face of known and obvious threats. See the Laurie Garrett article cited above for details. As reported by The New York Times,

“Mr. Trump had no explanation for why his White House shut down the Directorate for Global Health Security and Biodefense established at the National Security Council in 2016 by President Barack Obama after the 2014 Ebola outbreak.”

Yet this is more complicated than is apparent or is described in the reporting, as I commented on Twitter earlier this week. National security policy in the US has been dominated for many decades by people who grew up intellectually in the Cold War, or were taught by people who fought the Cold War. Cold War security was about nation states and, most importantly, nuclear weapons. When the Iron Curtain fell, the concern about large nations (i.e., the USSR) slipped away for a while, eventually to be replaced by small states, terrorism, and WMDs. But WMD policy, which in principle includes chemical and biological threats, has continued to be dominated by the nuclear security crowd. The argument is always that nuclear (and radiological) weapons are more of a threat and can cause more damage than a mere microbe, whether natural or artificial. And then there is the spending associated with countering the more kinetic threats: the big, shiny, splody objects get all the attention. So biosecurity and pandemic preparedness and response, which often are lumped together as "health security", get short shrift because the people setting priorities have other priorities. This has been a problem for both Democrat and Republican administrations, and demonstrates a history of bipartisan blindness.

Then, after decades of effort, and an increasing number of emergent microbial/health threats, finally a position and office were created within the National Security Council. While far from a panacea, because the USG needs to do much more than have policy in place, this was progress.

And then a new Administration came in, which not only has different overall security priorities but also is dominated by old school security people who are focussed on the intersection of a small number of nation states and nuclear weapons. John Bolton, in particular, is a hardline neocon whose intellectual roots are in Cold War security policy; so he is focussed on nukes. His ascendancy at the NSC was coincident not just with the NSC preparedness office being shut down, but also with the elimination of a parallel DHS office responsible for implementing policy. And then, beyond the specific mania driving a focus on nation states and nukes as the primary threats to US national security, there is the oft-reported war on expertise in the current executive branch and EOP. Add it all up: the USG is now severely understaffed for the current crisis.

Even the knowledgeable professionals still serving in the government have been hamstrung by bad policy in their ability to organize a response. To be blunt: patients are dying because the FDA & CDC could not get out of the way or — imagine it — help in accelerating the availability of testing at a critical time in a crisis. There will be a reckoning. And then public health in the US will need to be rebuilt, and earn trust again. There is a long road ahead. But first we have to deal with SARS-CoV-2.

Who is this beastie, SARS-CoV-2?

Just to get the introductions out of the way, the new virus is classified within order Nidovirales, family Coronaviridae, subfamily Orthocoronavirinae. You may also see it referred to as a betacoronavirus. To give you some sense of the diversity of coronaviruses, here is a nice, clean visual representation of their phylogenetic relationships. It contains names of many familiar human pathogens. If you are wondering why we don't have a better understanding of this family of viruses given their obvious importance to human health and to economic and physical security, good for you — you should wonder about this. For the cost of a single marginally functional F-35, let alone a white elephant new aircraft carrier, we could fund viral surveillance and basic molecular biology for all of these pathogens for years.

The diversity of pathogenic coronaviruses. Source: Xyzology.

Betacoronaviruses (BCVs) are RNA viruses that are surrounded by a lipid membrane. The membrane is damaged by soap and by ethyl or isopropyl alcohol; without the membrane the virus falls apart. BCVs differ from influenza viruses in both their genome structure and in the way they evolve. Influenza viruses have segmented genomes — the genes are, in effect, organized into chromosomes — and the virus can evolve either through swapping chromosomes with other flu strains or through mutations that happen when the viral polymerase, which copies RNA, makes a mistake. The influenza polymerase makes lots of mistakes, which means that many different sequences are produced during replication. This is a primary driver of the evolution of influenza viruses, and largely explains why new flu strains show up every year. While the core of the copying machinery in Betacoronaviruses is similar to that of influenza viruses, it also contains an additional component called Nsp-14 that corrects copying mistakes. Disable or remove Nsp-14 and you get influenza-like mutation rates in Betacoronaviruses. (For some reason I find that observation particularly fascinating, though I can’t really explain why.)
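To put rough numbers on why that proofreading function matters, here is a back-of-the-envelope calculation. The per-base error rates are order-of-magnitude assumptions of the sort commonly quoted for RNA virus polymerases with and without proofreading, not measured values for these particular viruses, and both are applied to a coronavirus-sized genome just to show the contrast.

```python
genome_length = 30_000   # SARS-CoV-2 genome is roughly 30 kb

# Order-of-magnitude assumptions for per-base error rates per round of copying:
error_rates = {
    "without proofreading (influenza-like polymerase)": 1e-4,
    "with proofreading (Nsp-14-style correction)":      1e-6,
}

for label, rate in error_rates.items():
    expected_mutations = genome_length * rate
    print(f"{label:50s} ~{expected_mutations:.2f} new mutations per genome copy")
```

The contrast is rough, but it illustrates why disabling Nsp-14 produces influenza-like mutation rates.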

There is another important feature of the BCV polymerase in that it facilitates recombination between RNA strands that happen to be floating around nearby. This means that if a host cell happens to be infected with more than one BCV strain at the same time, you can get a relatively high rate of new genomes being assembled out of all the parts floating around. This is one reason why BCV genome sequences can look like they are pasted together from strains that infect different species — they are often assembled exactly that way at the molecular level.

Before digging into the uncertainties around this virus and what is happening in the world, we need to understand how it is detected and diagnosed. There are three primary means of diagnosis. The first is by display of symptoms, which can span a long list, from cold-like runny nose, fever, sore throat, and other upper respiratory features, to much less pleasant, and in some cases deadly, lower respiratory impairment. (I recently heard an expert on the virus say that there are two primary ways that SARS-like viruses can kill you: "Either your lungs fill up with fluid, limiting your access to oxygen, and you drown, or all the epithelial cells in your lungs slough off, limiting your access to oxygen, and you suffocate." Secondary infections are also more lethal for people experiencing COVID-19 symptoms.) The second method of diagnosis is imaging of lungs, which includes x-ray and CT scans; SARS-CoV-2 causes particular pathologies in the lungs that can be identified on images and that distinguish it from other respiratory viruses. Finally, the virus can be diagnosed via two molecular assays, the first of which uses antibodies to directly look for viral proteins in tissue or fluid samples, while the other looks for whether genetic material is present; sophisticated versions can quantify how many copies of viral RNA are present in a sample.

Each of these diagnostic methods is usually described as being "accurate" or "sensitive" to some degree, when instead they should be described as having some error rate, a rate that might depend on when or where the method was applied, or on who was applying it. And every time you read how "accurate" or "sensitive" a method is, you should ask: compared to what? And this is where we get into uncertainty.
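"Compared to what" can be made concrete with a little arithmetic: the usefulness of a test depends not just on its sensitivity and specificity but on how common infection is in the population being tested. A sketch with assumed, purely illustrative test characteristics:

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Positive and negative predictive values of a test in a given population."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Assumed test characteristics, purely illustrative (not any real assay's numbers).
sensitivity, specificity = 0.90, 0.95

for prevalence in (0.001, 0.01, 0.10, 0.30):
    ppv, npv = predictive_values(sensitivity, specificity, prevalence)
    print(f"prevalence {prevalence:5.1%}: PPV {ppv:5.1%}, NPV {npv:6.2%}")
```

With the same test, a positive result means something very different in a mostly uninfected population than in a hospital ward full of symptomatic patients, which is one more reason that headline accuracy figures need context.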

Part 2 of this series will dig into specific sources of uncertainty spanning measurement and diagnosis to recommendations.

A memorial to Mark Buller, PhD, and our response to the propaganda film "Demon in the Freezer".

Earlier this year my friend and colleague Mark Buller passed away. Mark was a noted virologist and a professor at Saint Louis University. He was struck by a car while riding his bicycle home from the lab, and died from his injuries. Here is Mark's obituary as published by the university.

In 2014 and 2015, Mark and I served as advisors to a WHO scientific working group on synthetic biology and the variola virus (the causative agent of smallpox). In 2016, we wrote the following, previously un-published, response to an "Op-Doc" that appeared in the New York Times. In a forthcoming post I will have more to say about both my experience with the WHO and my thoughts on the recent publication of a synthetic horsepox genome. For now, here is the last version (circa May, 2016) of the response Mark and I wrote to the Op-Doc, published here as my own memorial to Professor Buller.


Variola virus is still needed for the development of smallpox medical countermeasures

On May 17, 2016 Errol Morris presented a short movie entitled “Demon in the Freezer” [note: quite different from the book of the same name by Richard Preston] in the Op-Docs section of the on-line New York Times. The piece purported to present both sides of the long-standing argument over what to do with the remaining laboratory stocks of variola virus, the causative agent of smallpox, which no longer circulates in the human population.

Since 1999, the World Health Organization has on numerous occasions postponed the final destruction of the two variola virus research stocks in Russia and the US in order to support public health related research, including the development of smallpox molecular diagnostics, antivirals, and vaccines.  

"Demon in the Freezer" clearly advocates for destroying the virus. The Op-Doc impugns the motivation of scientists carrying out smallpox research by asking: "If given a free hand, what might they unleash?" The narrative even suggests that some in the US government would like to pursue a nefarious policy goal of "mutually assured destruction with germs". This portion of the movie is interlaced with irrelevant, hyperbolic images of mushroom clouds. The reality is that in 1969 the US unilaterally renounced the production, storage, or use of biological weapons for any reason whatsoever, including in response to a biological attack from another country. The same cannot be said for ISIS and Al-Qaeda. In 1975 the US ratified the 1925 Geneva Protocol banning chemical and biological agents in warfare and became party to the Biological Weapons Convention, which emphatically prohibits the use of biological weapons in warfare.

“Demon in the Freezer” is constructed with undeniable flair, but in the end it is a benighted 21st century video incarnation of a middling 1930s political propaganda mural. It was painted with only black and white pigments, rather than a meaningful palette of colors, and with a brush so broad that it blurred any useful detail. Ultimately, and to its discredit, the piece sought to create fear and outrage based on unsubstantiated accusations.

Maintaining live smallpox virus is necessary for ongoing development and improvement of medical countermeasures. The first-generation US smallpox vaccine was produced in domesticated animals, while the second-generation smallpox vaccine was manufactured in sterile bioreactors; both have the potential to cause serious side effects in 10-20% of the population. The third generation smallpox vaccine has an improved safety profile, and causes minimal side effects. Fourth generation vaccine candidates, based on newer, lower cost, technology, will be even safer and some are in preclinical testing. There remains a need to develop rapid field diagnostics and an additional antiviral therapy for smallpox.

Continued vigilance is necessary because it is widely assumed that numerous undeclared stocks of variola virus exist around the world in clandestine laboratories. Moreover, unsecured variola virus stocks are encountered occasionally in strain collections left behind by long-retired researchers, as demonstrated in 2014 with the discovery of 1950s vintage variola virus in a cold room at the NIH. The certain existence of unofficial stocks makes destroying the official stocks an exercise in declaring “victory” merely for political purposes rather than a substantive step towards increasing security. Unfortunately, the threat does not end with undeclared or forgotten samples.

In 2015 a WHO Scientific Working Group on Synthetic Biology and Variola Virus and Smallpox determined that a “skilled laboratory technician or undergraduate student with experience of working with viruses” would be able to generate variola virus from the widely available genomic sequence in “as little as three months”. Importantly, this Working Group concluded that “there will always be the potential to recreate variola virus and therefore the risk of smallpox happening again can never be eradicated.” Thus, the goal of a variola virus-free future, however laudable, is unattainable. This is sobering guidance on a topic that requires sober consideration.

We welcome increased discussion of the risk of infectious disease and of public health preparedness. In the US these topics have too long languished among second (or third) tier national security conversations. The 2014 West Africa Ebola outbreak and the current Congressional debate over funding to counter the Zika virus exemplify the business-as-usual political approach of throwing half a bucket of water on the nearest burning bush while the surrounding countryside goes up in flames. Lethal infectious diseases are serious public health and global security issues and they deserve serious attention.

The variola virus has killed more humans numerically than any other single cause in history. This pathogen was produced by nature, and it would be the height of arrogance, and very foolish indeed, to assume nothing like it will ever again emerge from the bush to threaten human life and human civilization. Maintenance of variola virus stocks is needed for continued improvement of molecular diagnostics, antivirals, and vaccines. Under no circumstances should we unilaterally cripple those efforts in the face of the most deadly infectious disease ever to plague humans. This is an easy mistake to avoid.

Mark Buller, PhD, was a Professor of Molecular Microbiology & Immunology at Saint Louis University School of Medicine, who passed away on February 24, 2017. Rob Carlson, PhD, is a Principal at the engineering and strategy firm Biodesic and a Managing Director of Bioeconomy Capital.

The authors served as scientific and technical advisors to the 2015 WHO Scientific Working Group on Synthetic Biology and Variola Virus.

A Few Thoughts and References Re Conservation and Synthetic Biology

Yesterday at Synthetic Biology 7.0 in Singapore, we had a good discussion about the intersection of conservation, biodiversity, and synthetic biology. I said I would post a few papers relevant to the discussion, which are below.

These papers are variously: the framing document for the original meeting at the University of Cambridge in 2013 (see also "Harry Potter and the Future of Nature"), sponsored by the Wildlife Conservation Society; follow-on discussions from meetings in San Francisco and Bellagio; and my own efforts to figure out how to quantify the economic impact of biotechnology (which is not small, especially when compared to much older industries) and the economic damage from invasive species and biodiversity loss (which is also not small, measured as either dollars or jobs lost). The final paper in this list is my first effort to link conservation and biodiversity with economic and physical security, which requires shifting our thinking from the national security of nation states and their political boundaries to the natural security of the systems and resources that those nation states rely on for continued existence.

"Is It Time for Synthetic Biodiversity Conservation?", Antoinette J. Piaggio1, Gernot Segelbacher, Philip J. Seddon, Luke Alphey, Elizabeth L. Bennett, Robert H. Carlson, Robert M. Friedman, Dona Kanavy, Ryan Phelan, Kent H. Redford, Marina Rosales, Lydia Slobodian, Keith WheelerTrends in Ecology & Evolution, Volume 32, Issue 2, February 2017, Pages 97–107

Robert Carlson, "Estimating the biotech sector's contribution to the US economy", Nature Biotechnology, 34, 247–255 (2016), 10 March 2016

Kent H. Redford, William Adams, Rob Carlson, Bertina Ceccarelli, “Synthetic biology and the conservation of biodiversity”, Oryx, 48(3), 330–336, 2014.

"How will synthetic biology and conservation shape the future of nature?", Kent H. Redford, William Adams, Georgina Mace, Rob Carlson, Steve Sanderson, Framing Paper for International Meeting, Wildlife Conservation Society, April 2013.

"From national security to natural security", Robert Carlson, Bulletin of the Atomic Scientists, 11 Dec 2013.

Tim Cook is Defending Your Brain

Should the government have the right to troll through your thoughts and memories? That seems like a question for a "Minority Report" or "Matrix" future, but legal precedent is being set today. This is what is really at stake in an emerging tussle between Washington DC and Silicon Valley.

The Internets are all abuzz with Apple's refusal to hack an iPhone belonging to an accused terrorist. The FBI has served a court order on Apple, based on the All Writs Act of 1789, requiring Apple to break the lock that limits the number of times a passcode can be tried. Since law enforcement has been unable to crack the security of iOS on its own, it wants Apple to write special software to do the job. Here is Wired's summary. This NYT story has additional good background. The short version: should law enforcement and intelligence agencies be able to compel corporations to hack devices owned by citizens and entrusted with their sensitive information?

Apple CEO Tim Cook published a letter saying no, thank you, because weakening the security of iPhones would be bad for his customers and "has implications far beyond the legal case at hand". Read Cook's letter; it is thoughtful. The FBI says it is just about this one phone and "isn't about trying to set a precedent," in the words of FBI Director James Comey. But this language is neither accurate nor wise — and it is important to say so.

Once the software is written, the U.S. government can hardly argue it will never be used again, nor that it will never be stolen off government servers. And since the point of the hack is to be able to push it onto a phone without consent (which is itself a backdoor that needs closing), this software would allow breaking the locks on any susceptible iPhone, anywhere. Many commentators have observed that any effort to hack iOS this once would facilitate repetitions, and any general weakening of smartphone security could easily be exploited by governments or groups less concerned about due process, privacy, or human rights. (And you do have to wonder whether Tim Cook's position here is influenced by his experience as a gay man, a demographic that has been persecuted, if not actually prosecuted, merely for thought and intent by the same organization now sitting on the other side of the table. He knows a thing or two about privacy.) U.S. Senator Ron Wyden has a nice take on these issues. Yet while these are critically important concerns for modern life, they are shortsighted. There is much more at stake here than just one phone, or even the fate of one particular company. The bigger, longer term issue is whether governments should have access to electronic devices that we rely on in daily life, particularly when those devices are becoming extensions of our bodies and brains. Indeed, these devices will soon be integrated into our bodies — and into our brains.

Hacking electronically-networked brains sounds like science fiction. That is largely because there has been so much science fiction produced about neural interfaces, Matrices, and the like. We are used to thinking of such technology as years, or maybe decades, off. But these devices are already a reality, and will only become more sophisticated and prevalent over the coming decades. Policy, as usual, is way behind.

My concern, as usual, is less about the hubbub in the press today and more about where this all leads in ten years. The security strategy and policy we implement today should be designed for a future in which neural interfaces are commonplace. Unfortunately, today's politicians and law enforcement are happy to set legal precedent that will create massive insecurity in just a few years. We can be sure that any precedent of access to personal electronic devices adopted today, particularly any precedent in which a major corporation is forced to write new software to hack a device, will still be cited decades hence, when technology that connects hardware to our wetware is certain to be common. After all, the FBI is now proposing that a law from 1789 applies perfectly well in 2016, allowing a judge to "conscript Apple into government service", and many of our political representatives appear delighted to concur. A brief tour of current technology and security flaws sets the stage for how bad it is likely to get.

As I suggested a couple of years ago, hospital networks and medical devices are examples of existing critical vulnerabilities. Just in the last week hackers took control of computers and devices in a Los Angeles hospital, and only a few days later received a ransom to restore access and functionality. We will be seeing more of this. The targets are soft, and when attacked they have little choice but to pay when patients' health and lives are on the line. What are hospitals going to do when they are suddenly locked out of all the ventilators or morphine pumps in the ICU? Yes, yes, they should harden their security. But they won't be fully successful, and additional ransom events will inevitably happen. More patients will be exposed to more such flaws as they begin to rely more on medical devices to maintain their health. Now consider where this trend is headed: what sorts of security problems will we create by implanting those medical devices into our bodies?

Already on the market are cochlear implants that are essentially ethernet connections to the brain, although they are not physically configured that way today. An external circuit converts sound into signals that directly stimulate the auditory nerves. But who holds the password for the hardware? What other sorts of signals can be piped into the auditory nerve? This sort of security concern, in which networked electronics implanted in our bodies create security holes, has actually been with us for more than a decade. When serving as Vice President, Dick Cheney had the wireless networking on his fully-implanted heart defibrillator disabled because it was perceived as a threat. The device contained a test mode that could be exploited to fully discharge the battery into the surrounding tissue. This might be called a fatal flaw. And it will only get worse.

DARPA has already limited the strength of a recently developed, fully articulated bionic arm to "human normal" precisely because the organization is worried about hacking. These prosthetics are networked in order to tune their function and provide diagnostic information. Hacking is inevitable, by users interested in modifications and by miscreants interested in mischief.

Not content to replace damaged limbs, within the last few months DARPA has announced a program to develop what the staff sometimes calls a "cortical modem". DARPA is quite serious about developing a device that will provide direct connections between the internet and the brain. The pieces are coming together quickly. Several years ago a patient in Sweden received a prosthesis grafted to the bone in his arm and controlled by local neural signals. Last summer I saw Gregoire Courtine show video of a monkey implanted with a microfabricated neural bridge that spanned a severed spinal cord; flip a switch on and the monkey could walk, flip it off and the monkey was lame. Just this month came news of an implanted cortical electrode array used to directly control a robot arm. Now, imagine you have something like this implanted in your spine or head, so that you can walk or use an arm, and you find that the manufacturer was careless about security. Oops. You'll have just woken up — unpleasantly — in a William Gibson novel. And you won't be alone. Given the massive medical need, followed closely by the demand for augmentation, we can expect rapid proliferation of these devices and accompanying rapid proliferation of security flaws, even if today they are one-offs. But that is the point; as Gibson has famously observed, "The future is already here — it's just not evenly distributed yet."

When — when — cortical modems become an evenly distributed human augmentation, they will inevitably come with memory and computational power that exceed the wetware they are attached to. (Otherwise, what would be the point?) They will expand the capacity of all who receive them. They will be used as any technology is, for good and ill. Which means they will be targets of interest for law enforcement and intelligence agencies. Judges will be grappling with this for decades: where does the device stop and the human begin? ("Not guilty by reason of hacking, your honor." "I heard voices in my head.") And these devices will also come with security flaws that will expose the human brain to direct influence from attackers. Some of those flaws will be accidents, bugs, zero-days. But how will we feel about back doors built in to allow governments to pursue criminal or intelligence investigations, back doors that lead directly into our brains? I am profoundly unimpressed by suggestions that any government could responsibly use or look after keys to any such back door.

There are other incredibly interesting questions here, though they all lead to the same place. For example, would neural augmentation count as a medical device? If so, what does the testing look like? If not, who will be responsible for guaranteeing safety and security? And I have to wonder, given the historical leakiness of backdoors, if governments insist on access to these devices, who is going to want to accept the liability inherent in protecting access to customers' brains? What insurance or reinsurance company would issue a policy indemnifying a cortical modem with a known, built-in security flaw? Undoubtedly an insurance policy can be written that exempts governments from responsibility for the consequences of using a backdoor, but how can a government or company guarantee that no one else will exploit the backdoor? Obviously, they can do no such thing. Neural interfaces will have to be protected by maximum security, otherwise manufacturers will never subject themselves to the consequent product liability.

Which brings us back to today, and the precedent set by Apple in refusing to make it easy for the FBI to hack an iPhone. If all this talk of backdoors and golden keys by law enforcement and politicians moves forward to become precedent by default, or is written into law, we risk building security holes into even more devices. Eventually, we will become subject to those security holes in increasingly uncomfortable, personal ways. That is why it is important to support Tim Cook as he defends your brain.

 

70 Years After Hiroshima: "No government is well aware of the economic importance of biotechnology"

I was recently interviewed by Le Monde for a series on the impact of Hiroshima on science and science policy, with a particular focus on biotechnology, synthetic biology, and biosecurity. Here is the story in French. Since the translation via Google is a bit cumbersome to read, below is the English original.

Question 1

On the 16th of July 1945, after the first large-scale nuclear test in New Mexico (called Trinity), the American physicist Kenneth Bainbridge, who directed the test, told Robert Oppenheimer, head of the Manhattan Project, "Now we are all sons of bitches."

In your discipline, do you feel that the moment when researchers might have the same revelation has been reached? Will it be soon?

I think this analogy does not apply to biotechnology. It is crucially important to distinguish between weapons developed in a time of war and the pursuit of science and technology in a time of peace. Over the last thirty years, biotechnology has emerged as a globally important technology because it is useful and beneficial. 

The development and maintenance of biological weapons is internationally outlawed, and has been for decades. The Trinity test, and more broadly the Manhattan Project, was a response to what the military and political leaders of the time considered an existential threat. These were actions taken in a time of world war. The scientists and engineers who developed the U.S. bombs were almost to a person ambivalent about their roles – most saw the downsides, yet were also convinced of their responsibility to fight against the Axis Powers. Developing nuclear weapons was seen as imperative for survival.

The scale of the Manhattan Project (both in personnel and as a fraction of GDP) was unprecedented, and remains so. In contrast to the exclusive governmental domain of nuclear weapons, biotechnology has been commercially developed largely with private funds. The resulting products – whether new drugs, new crop traits, or new materials – have clear beneficial value to our society.

Question 2

Do you have this feeling in other disciplines? Which ones? Why?

No. There is nothing in our experience like the Manhattan Project and nuclear weapons. It is easy to point to the participants’ regrets, and to the long aftereffects of dropping the bomb, as a way to generate debate about, and fear of, new technologies. The latest bugaboos are artificial intelligence and genetic engineering. But neither of these technologies – even if they can be said to qualify as mature technologies – is even remotely as impactful as nuclear weapons.

Question 3

What could be the impact of a "Hiroshima" in your discipline?

In biosecurity circles, you often hear discussion of what would happen if there were “an event”. It is often not clear what that event might be, but it is presumed to be bad. The putative event could be natural or it could be artificial. Perhaps the event might kill as many people as Hiroshima. (Though that would be hard, as even the most deadly organisms around today cannot wipe out populated cities in an instant.) Perhaps the event would be the intentional use of a biological weapon, and perhaps that weapon would be genetically modified in some way to enhance its capabilities. This would obviously be horrible. The impact would depend on where the weapon came from, and who used it. Was it the result of an ongoing state program? Was it a sample deployed, or stolen, from a discontinued program? Or was it built and used by a terrorist group? A state can be held accountable by many means, but we are finding it challenging to hold non-state groups to account. If the organism is genetically modified, it is possible that there will be pushback against the technology. But biotechnology is producing huge benefits today, and restrictions motivated by the response to an event would reduce those benefits. It is also very possible that biotechnology will be the primary means of providing remedies to bioweapons (probably vaccines or drugs), in which case an event might wind up pushing the technology even faster.

Question 4

After 1945, physicists, including Einstein, engaged in ethical reflection on their own work. Has your discipline done the same? Is it doing the same today?

Ethical reflection has been built into biotechnology from its origins. The early participants met at Asilomar to discuss the implications of their work. Today, students involved in the International Genetically Engineered Machines (iGEM) competition are required to complete a “policy and practices” (also referred to as “ethical, legal, and social implications” (ELSI)) examination of their project. This isn’t window dressing, by any means. Everyone takes it seriously. 

Question 5

Do you think it would be necessary to raise public awareness about the issues related to your work?

Well, I’ve been writing and speaking about this issue for 15 years, trying to raise awareness of biotechnology and where it is headed. My book, “Biology is Technology”, was specifically aimed at encouraging public discussion. But we definitely need to work harder to understand the scope and impact of biotechnology on our lives. No government measures very well the size of the biotechnology industry – either in terms of revenues or in terms of benefits – so very few people understand how economically pervasive it is already. 

Question 6

What is, in your view, the degree of liberty scientists have in the face of the political and industrial powers that will exploit the results of their scientific work?

Scientists face the same expectation of personal responsibility as every other member of the societies to which they belong. That’s pretty simple. And most scientists are motivated by ideals of truth, the pursuit of knowledge, and improving the human condition. That is one reason why most scientists publish their results for others to learn from. But it is less clear how to control scientific results after they are published. I would turn your question in another direction, and say politicians and industrialists should be responsible for how they use science, rather than putting this all on scientists. If you want to take this back to the bomb, the Manhattan Project was a massive military operation in a time of war, implemented by both government and the private sector. It relied on science, to be sure, but it was very much a political and industrial activity – you cannot divorce these two sides of the Project.

Question 7

Do you think about accurate measures [?] to prevent further Hiroshima?

I constantly think about how to prevent bad things from happening. We have to pay attention to how new technologies are developed and used. That is true of all technologies. For my part, I work domestically and internationally to make sure policy makers understand where biotechnology is headed and what it can do, and also to make sure it is not misused. 

But I think the question is rather off target. Bombing Hiroshima was a conscious decision made by an elected leader in a time of war. It was a very specific sort of event in a very specific context. We are not facing any sort of similar situation. If the intent of the question is to make an analogy to intentional use of biological weapons, these are already illegal, and nobody should be developing or storing them under any circumstances. The current international arms control regime is the way to deal with it. If the intent is to allude to the prevention of “bad stuff”, then this is something that every responsible citizen should be doing anyway. All we can do is pay attention and keep working to ensure that technologies are not used maliciously.

Brewing Bad Biosecurity Policy

Last week brought news of a truly interesting advance in porting opioid production to yeast. This is pretty cool science, because it involves combining enzymes from several different organisms to produce a complex and valuable chemical, although no one has yet managed to integrate the whole synthetic pathway in microbes. It is also potentially pretty cool economics, because implementing opiate production in yeast should dramatically lower the price of a class of important pain medications, to the point that developing countries might finally be able to afford them.

Alongside the scientific article was a Commentary – with images of drug dens and home beer brewing – explicitly suggesting that high doses of morphine and other addictive narcotics would soon be brewed at home in the garage. The text advertised “Home-brew opiates” – wow, just like beer! The authors of the Commentary used this imagery to argue for immediate regulation of 1) yeast strains that can make opioids (even though no such strains exist yet), and 2) the DNA sequences that code for the opioid synthesis pathways. This is a step backward for biosecurity policy, by more than a decade, because the proposal embraces measures known to be counterproductive for security.

The wrong recipe.

I'll be very frank here – proposals like this are deep failures of the science policy enterprise. The logic that leads to “must regulate now!” is 1) methodologically flawed and 2) ignores data we have in hand about the impacts of restricting access to technology and markets. In what follows, I will deal in due course with both kinds of failures, as well as looking at the predilection to assume regulation and restriction should be the primary policy response to any perceived threat.

There are some reading this who will now jump to “Carlson is yet again saying that we should have no regulation; he wants everything to be available to anyone.” This is not my position, and never has been. Rather, I insist that our policies be grounded in data from the real world. And the real world data we have demonstrates that regulation and restriction often cause more harm than good. Moreover, harm is precisely the impact we should expect from restricting access to democratized biological technologies. What if even simple analyses suggest that proposed actions are likely to make things worse? What if the specific policy actions recommended in response to a threat have already been shown to exacerbate damage from the threat? That is precisely the case here. I am constantly confronted with people saying, "That's all very well and good, but what do you propose we do instead?" The answer is simple: I don't know. Maybe nothing. Maybe there isn't anything we can do. But for now, I just want us to not make things worse. In particular I want to make sure we don't screw up the emerging bioeconomy by building in perverse incentives for black markets, which would be the worst possible development for biosecurity.

Policy conversations at all levels regularly make these same mistakes, and the arguments are nearly uniform in structure. “Here is something we don't know about, or are uncertain about, and it might be bad – really, really bad – so we should most certainly prepare policy options to prevent the hypothetical worst!” Exclamation points are usually just implied throughout, but they are there nonetheless. The policy options almost always involve regulation and restriction of a technology or process that can be construed as threatening, usually with little or no consideration of what that threatening thing might plausibly grow into, nor of how similar regulatory efforts have fared historically.

This brings me to the set up. Several news pieces (e.g., the NYT, Buzzfeed) succinctly pointed out that the “home-brew” language was completely overblown and inflammatory, and that the Commentary largely ignored both the complicated rationale for producing opioids in yeast and the complicated benefits of doing so. The Economist managed to avoid getting caught up in discussing the Commentary, remaining mostly focussed on the science, while in the last paragraph touching on the larger market issues and potential future impacts of “home brew opium” to pull the economic rug out from under heroin cartels. (Maybe so. It's an interesting hypothesis, but I won't have much to say about it here.) Over at Biosecu.re, Piers Millet – formerly of the Biological Weapons Convention Implementation Support Unit – calmly responded to the Commentary by observing that, for policy inspiration, the authors look backward rather than forward, and that the science itself demonstrates the world we are entering requires developing completely new policy tools to deal with new technical and economic realities.

Stanford's Christina Smolke, who knows a thing or two about opioid production in yeast, observed in multiple news outlets that getting yeast to produce anything industrially at high yields is finicky to get going and then hard to maintain as a production process. It's relatively easy to produce trace amounts of lots of interesting things in microbes (ask any iGEM team); it is very hard and very expensive to scale up to produce interesting amounts of interesting things in microbes (ask any iGEM team). Note that we are swimming in data about how hard this is to do, which is an important part of this story. In addition to the many academic examples of challenges in scaling up production, the last ten years are littered with startups that failed at scale up. The next ten years, alas, will see many more.

Even with an engineered microbial strain in hand, it can be extraordinarily hard to make a microbe jump through the metabolic and fermentation hoops to produce interesting/useful quantities of a compound. And then transferring that process elsewhere is very frequently its own expensive and difficult effort. It is not true that you can just mail a strain and a recipe from one place to another and automatically get the same result. However, it is true that all this will get easier over time, and many people are working on reproducible process control for biological production.

That future looks amazing. I've written many times about how the future of the economy looks like beer and cows – in other words, that our economy will inevitably be based on distributed biological manufacturing. But that is the future: i.e., not the present. Nor is it imminent. I truly wish it were imminent, but it is not. Whole industries exist to solve these problems, and much more money and effort will be spent before we get there. The economic drivers are huge. Some of the investments made by Bioeconomy Capital are, in fact, aimed at eventually facilitating distributed biological manufacturing. But, if nothing else, these investments have taught me just how much effort is required to reach that goal. If anybody out there has a credible plan to build the Cowborg or to microbrew chemicals and pharmaceuticals as suggested by the Commentary, I will be your first investor. (I said “credible”! Don't bother me otherwise.) But I think any sort of credible plan is years away. For the time being, the only thing we can expect to brew like beer is beer.

FBI Supervisory Special Agent Ed You makes great use of the “brewing bad” and “baking bad” memes, mentioned in the Commentary, in talking to students and professionals alike about the future of drug production. But this is in the context of taking personal responsibility for your own science and for speaking up when you see something dangerous. I've never heard Ed say anything about increasing surveillance and enforcement efforts as the way forward. In fact, in the Times piece, Ed specifically says, “We’ve learned that the top-down approach doesn’t work.” I can't say exactly why Ed chose that turn of phrase, but I can speculate that it is based 1) on his own experience as a professional bench molecular biologist, 2) the catastrophically bad impacts of the FBI's earlier arrests and prosecutions of scientists and artists for doing things that were legal, and 3) the official change in policy from the White House and National Security Council away from suppression and toward embracing and encouraging garage biology. The standing order at the FBI is now engagement. In fact, Ed You's arrival on the scene was coincident with any number of positive policy changes in DC, and I am happy to give him all the credit I can. Moreover, I completely agree with Ed and the Commentary authors that we should be discussing early on the implications of new technologies, an approach I have been advocating for 15 years. But I completely disagree with the authors that the current or future state of the technology serves as an indicator of the need to prepare some sort of regulatory response. We tried regulating fermentation once before; that didn't work out so well [1]. 

Badly baked regulatory policy.

So now we're caught up to about the middle of the Commentary. At this point, the story is like other such policy stories. “Assume hypothetical thing is inevitable: discuss and prepare regulation.” And like other such stories, here is where it runs off the rails with a non sequitur common in policy work. Even if the assumption of the thing's inevitability is correct (which is almost always debatable), the next step should be to assess the impact of the thing. Is it good, or is it bad? (By a particular definition of good and bad, of course, but never mind that for now.) Usually, this question is actually skipped and the thing is just assumed to be bad and in need of a policy remedy, but the assumption of badness, breaking or otherwise, isn't fatal for the analysis.

Let's say it looks bad – bad, bad, bad – and the goal of your policy is to try to either head it off or fix it. First you have to have some metric to judge how bad it is. How many people are addicted, or how many people die, or how is the crime rate affected? Just how bad is it breaking? Next – and this is the part the vast majority of policy exercises miss – you have to try to understand what happens in the absence of a policy change. What is the cost of doing nothing, of taking no remediating action? Call this the null hypothesis. Maybe there is even a benefit to doing nothing. But only now, after evaluating the null hypothesis, are you in a position to propose remedies, because only now do you have a metric against which to compare costs and benefits. If you leap directly to “the impacts of doing nothing are terrible, so we must do something, anything, because otherwise we are doing nothing”, then you have already lost. To be sure, policy makers and politicians feel that their job is to do something, to take action, and that if they are doing nothing then they aren't doing their jobs. That is just a recipe for bad policy. Without the null hypothesis, your policy development is a waste of time and, potentially, could make matters worse. This happens time and time again. Prohibition, for example, was exactly this sort of failure: it cost much more than it benefited [2].
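
As a toy illustration of the null-hypothesis comparison described above, the sketch below weighs a hypothetical intervention against the do-nothing baseline. All of the numbers are made up; the point is the structure: a remedy only makes sense if the harm remaining after the policy, plus the cost of implementing it and of side effects such as black markets, is smaller than the cost of inaction.

```python
def net_benefit(harm_if_nothing, harm_with_policy, implementation_cost, side_effect_cost):
    """Net benefit of a policy relative to the null hypothesis (doing nothing).

    Positive means the policy leaves us better off than inaction; negative
    means the 'remedy' makes things worse. All inputs are in whatever common
    units of badness you choose (dollars, deaths, addictions).
    """
    cost_of_doing_nothing = harm_if_nothing
    cost_of_policy = harm_with_policy + implementation_cost + side_effect_cost
    return cost_of_doing_nothing - cost_of_policy

# Entirely hypothetical numbers: a restriction that halves the visible harm but
# is costly to enforce and spawns a black market that is harder to surveil.
print(net_benefit(harm_if_nothing=100, harm_with_policy=50,
                  implementation_cost=20, side_effect_cost=40))  # -10: worse than doing nothing
```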

We keep making the same mistake. We have plenty of data and reporting, courtesy of the DEA, that the ongoing crackdown on methamphetamine production has created bigger and blacker markets, as well as mayhem and violence in Mexico, all without much impact on domestic drug use. Here is the DEA Statistics & Facts page – have a look and then make up your own mind.

I started writing about the potential negative impacts of restricting access to biological technologies in 2003 (PDF), including the likely emergence of black markets in the event of overregulation. I looked around for any data I could find on the impacts of regulating democratized technologies. In particular, I happened upon the DEA's first reporting on the impacts of the then newly instituted crackdown on domestic methamphetamine production and distribution. Even in 2003, the DEA was already observing that it had created bigger, blacker markets – which are by definition harder to surveil and disrupt – without impacting meth use. The same story has played out similarly in cocaine production and distribution, and more recently in the markets for “bath salts”, aka “legal highs”.

That is, we have multiple, clear demonstrations that, rather than improving the world, restricting access to distributed production can instead cause harm. But, really, when has this ever worked? And why do people think going down the same path in the future will lead anywhere else? I am still looking for data – any data at all – that supports the assertion that regulating biological technologies will have any different result. If you have such data, bring it. Let's see it. In the absence of that data, policy proposals that lead with regulation and restriction are doomed to repeat the failures of the past. It has always seemed to me like a terrible idea to transfer such policies over to biosecurity. Yet that is exactly what the Commentary proposes.

Brewing black markets.

The fundamental problem with the approach advocated in the Commentary is that security policies, unlike beer brewing, do not work equally well across all technical and economic scales. What works in one context will not work in another. Nuclear weapons can be secured by guns, gates, and guards because they are expensive to build and the raw materials are hard to come by, so heavy touch regulation works just fine. There are some industries – as it happens, beer brewing – where only light touch regulation works. In the U.S., we tried heavy touch regulation in the form of Prohibition, and it failed miserably, creating many more problems than it solved. There are other industries, for example DNA and gene synthesis, in which even light touch regulation is a bad idea. Indeed, light touch regulation has already created the problem it was supposed to prevent, namely the existence of DNA synthesis providers that 1) intentionally do not screen their orders and 2) ship to countries and customers that are on unofficial black lists.

For those who don't know this story: In early 2013, the International Council for the Life Sciences (ICLS) convened a meeting in Hong Kong to discuss "Codes of Conduct" for the DNA synthesis industry, namely screening orders and paying attention to who is doing the ordering. According to various codes and guidelines promulgated by industry associations and the NIH, DNA synthesis providers are supposed to reject orders that are similar to sequences that code for pathogens, or genes from pathogens, and it is suggested that they do not ship DNA to certain countries or customers (the unofficial black list). Here is a PDF of the meeting report; be sure to read through Appendix A.

The report is fairly anodyne in describing what emerged in discussions. But people who attended have since described in public the Chinese DNA synthesis market as follows. There are 3 tiers of DNA providers. The first tier is populated with companies that comply with the various guidelines and codes promulgated internationally, because this tier serves international markets. There is a second tier that appears to similarly comply, because while these companies serve primarily the large internal market, they have aspirations of also serving the international market. There is a third tier that exists specifically to serve orders from customers seeking ways around the guidelines and codes. (One company in this tier was described to me as a "DNA shanty", with the employees living over the lab.) Thus the relatively light touch guidelines (which are not laws) have directly incentivized exactly the behavior they were supposed to prevent. This is not a black market, per se, and cannot accurately be described as illegal, so let's call it a "grey market".

I should say here that this is entirely consistent with my understanding of biotech in China. In 2010, I attended a warm up meeting for the last round of BWC negotiations. After that meeting, I chatted with one of the Chinese representatives present, hoping to gain a little bit of insight into the size of the Chinese bioeconomy and the state of the industry. My query was met with frank acknowledgment that the Chinese government isn't able to keep track of the industry, doesn't know how many companies are active, or how many employees they have, or what they are up to, and so doesn't hold out much hope of controlling the industry. I covered this a bit in my 2012 Biodefense Net Assessment report for DHS. (If anyone has any new insight into the Chinese biotech industry, I am all ears.) Not that the U.S. or Europe is any better in this regard, as our mechanisms for tracking the biotech industry are completely dysfunctional, too. There could very well be DNA synthesis providers operating elsewhere that don't comply with the recommended codes of conduct: we have no real means of broadly surveying for this behavior. There are no physical means either to track it remotely or to control it.

I am a little bit sensitive about the apparent emergence of the DNA synthesis grey market, because I warned for years in print and in person that DNA screening would create exactly this outcome. I was condescendingly told on many occasions that it was foolish to imagine a black market for DNA. And then we have to do something, right? But it was never very complicated to think this through. DNA is cheap, and getting cheaper. You need this cheap DNA as code to build more complicated, more valuable things. Ergo, restrictions on DNA synthesis will incentivize people to seek, and to provide, DNA outside any control mechanism. The logic is pretty straightforward, and denying it is simply willful self-deception. Regulation of DNA synthesis will never work. In the vernacular of the day: because economics. To make it even simpler: because humans.

So it rankles that people are still suggesting proscription of certain DNA sequences as a viable route to security. And it is demonstrably counterproductive. The restrictions incentivize the bad behavior they are supposed to prevent, probably much earlier than might have happened otherwise. The take home message here is that not all industries are the same, because not all technologies are the same, and our policy approaches should take these differences into account rather than papering over them. In particular, restricting access to information in our modern economy is a losing game.

Where do we go from here?

We are still at the beginning of biotech. This is the most important time to get it right. This is the most important time not to screw up and make things worse. And it is important that we are at the beginning, because things are not yet screwed up.

Conversely, we are well down the road in developing and deploying drug policies, with much damage done. To be sure, despite the accumulated and ongoing costs, you have to acknowledge that it is not at all clear that suddenly legalizing drugs such as meth or cocaine would be a positive step. I am not in any way making that argument. But it is abundantly clear that drug enforcement activities have created the world we live in today. Was there an alternative? If the DEA had been able to do a cost/benefit analysis of the impacts of its actions – that is, predict the emergence of DTOs and their role in production, trafficking, and violence – would the policy response 15 years ago have been any different? If Nixon had more thoughtfully considered even what was known 50 years ago about the impacts of proscription, would he have launched the war on drugs? That is a hard question, because drug policy is clearly driven more by stories and personal politics than by facts. I am inclined to think the present drug policy mess was inevitable. Even with the DEA's self-diagnosed role in creating and sustaining DTOs, the national conversation is still largely dominated by “the war on drugs”. And thus the first reaction to the prospect of microbial narcotics production is to employ strategies and tactics that have already failed elsewhere. I would hate to think we are in for a war on microbes, because that is doomed to failure.

But we haven't yet made all those mistakes with biological technologies. I continue to hope that, if nothing else, we will avoid making things worse by rejecting policies we already know won't work. 

Notes:

[1] Pause here to note that even this early in the set up, the Commentary conflates via words and images the use of yeast in home brew narcotics with centralized brewing of narcotics by cartels. These are two quite different, and perhaps mutually exclusive, technoeconomic futures. Drug cartels very clearly have the resources to develop technology. Depending on whether you listen to the U.S. Navy or the U.S. Coast Guard, either 30% or 80% of the cocaine delivered to the U.S. is transported at some point in semisubmersible cargo vessels or in fully submersible cargo submarines. These 'smugglerines', if you will, are the result of specific technology development efforts directly incentivized by governmental interdiction efforts. Similarly, if cartels decide that developing biological technologies suits their business needs, they are likely to do so. And cartels certainly have incentives to develop opioid-producing yeast, because fermentation usually lowers the cost of goods by between 50% and 90% compared to production in plants. Again, cartels have the resources, and they aren't stupid. If cartels do develop these yeast strains, for competitive reasons they certainly won't want anyone else to have them. Home brew narcotics would further undermine their monopoly.

[2] Prohibition was obviously the result of a complex socio-political situation, just as was its repeal. If you want a light touch look at the interaction of the teetotaler movement, the suffragette movement, and the utility of Prohibition in continued repression of freed slaves after the Civil War, check out Ken Burns's “Prohibition” on Netflix. But after all that, it was still a dismal failure that created more problems than it solved. Oh, and Prohibition didn't accomplish its intended aims. Anheuser-Busch thrived during those years. Its best selling products at the time were yeast and kettles (see William Knoedleseder's Bitter Brew)...

Biosecurity is Everyone's Business (Part 2)

(Here is Part 1.)

Part 2. From natural security to neural security

Humans are fragile. For most of history we have lived with the expectation that we will lose the use of organs, and some of us limbs, as we age or suffer injury. But that is now changing. Prostheses are becoming more lifelike and more useful, and replacement organs have been used to save lives and restore function. But how robust are the replacement parts? The imminent prospect of technological restoration of human organs and limbs lost to injury or disease is cause to think carefully about increasing both our biological capabilities and our technological fragilities.

Technology fails us for many reasons. A particular object or application may be poorly designed or poorly constructed. Constituent materials may be faulty, or maintenance may be shoddy. Failure can result from inherent security flaws, which can be exploited directly by those with sufficient technical knowledge and skill. Failure can also be driven by clever and conniving exploits of the overall system that focus on its weakest link, almost always the human user, by inducing them to make a mistake or divulge critical information. Our centuries of experience and documentation of such failures should inform our thinking about the security of emerging technologies, particularly as we begin to fuse biology with electronic systems. The growing scope of biotechnology will therefore require constant reassessment of what vulnerabilities we are introducing through that expansion. Examining the course of other technologies provides some insight into the future of biology.

We carry powerful computers in our pockets, use the internet to gather information and access our finances, and travel the world in aircraft that are often piloted and landed by computers. We are told we can trust this technology with our financial information, our identities and social networks, and, ultimately, our lives. At the same time, technology is constantly shown to be vulnerable and fragile at a non-trivial rate -- resulting in identity theft, financial loss, and sometimes personal injury and death. We embrace technology despite well-understood risks; automobiles, electricity, fossil fuels, automation, and bicycles all kill people every day in predictable numbers. Yet we continue to use technology, integrating it further into multiple arenas in our lives, because we decide that the benefits outweigh risks.

Healthcare is one arena in which risks are multiplying. The IT security community has for some years been aware of network vulnerabilities in medical devices such as pacemakers and implantable defibrillators. The ongoing integration of networked medical devices in health care settings, an integration that is constantly introducing both new capabilities and new vulnerabilities, is now the focus of extensive efforts to improve security. The impending introduction of networked, semi-autonomous prostheses raises obvious, similar concerns. Wi-Fi enabled pacemakers and implantable defibrillators are just the start, as soon we will see bionic arms, legs, and eyes with network connections that allow performance monitoring and tuning.

Eventually, prostheses will not simply restore "human normal" capabilities, they will also augment human performance. I learned recently that DARPA explicitly chose to limit the strength of its robotic arm, but that can't last: science-fiction levels of super robotic strength are coming. What happens when hackers get ahold of this technology? How will people begin to modify themselves and their robotic appendages? And, of course, the flip side of having enhanced physical capabilities is having enhanced vulnerabilities. By definition, tuning can improve or degrade performance, and this raises an important security question: who holds the password for your shiny new arm? Did someone remember to overwrite the factory default password? Is the new password susceptible to a dictionary attack? The future brings even more concerns. Control connections to a prosthesis are bi-directional and, as the technology improves, ever better neural interfaces will eventually jack these prostheses directly into the brain. "Tickling" a robotic limb could take on a whole new meaning, providing a means to connect various kinds of external signals to the brain in new ways.

Beyond limbs, we must also consider neural connections that serve to open entirely novel senses. It is not a great leap to envision a wide range of ensuing digital-to-neural input/output devices. These technologies are evolving at a rapid rate, and through them we are on the cusp of opening up human brains to connections with a wide range of electromechanical hardware capabilities and, indeed, all the information on the internet.

Just this week saw publication of a cochlear implant that delivers a gene therapy to auditory neurons, promoting the formation of electrical connections with the implant and thereby dramatically improving the hearing response of test animals. We are used to the idea of digital music files being converted by speakers into sound waves, which enter the brain through the ear. But the cochlear implant is basically an ethernet connection wired to your auditory nerve, which in principle means any signal can be piped into your brain. How long can it be before we see experiments with a cochlear (or other) implant that enables direct conversion of arbitrary digital information into neural signals? At that point, "hearing" might extend into every information format. So, again we must ask: who holds the password to your brain implant?

Hacking the Bionic Man

As this technology is deployed in the population it is clear that there can be no final and fixed security solution. Most phone and computer users are now all too aware that new hardware, firmware, and operating systems always introduce new kinds of risks and threats. The same will be true of prostheses. The constant rat race to chase down security holes in new product upgrades will soon extend directly into human brains. As more people are exposed to medical device vulnerabilities, security awareness and improvement must become an integrated part of medical practice. This discussion can be easily extended to potential vulnerabilities that will arise from the inevitable integration into human bodies of not just electromechanical devices, but of ever more sophisticated biological technologies. The exploration of prosthesis security, loosely defined, gives some indication of the scope of the challenge ahead.

The class of things we call prostheses will soon expand beyond electromechanical devices to encompass biological objects such as 3D printed tissues and lab-grown organs. As these cell-based therapies begin to enter human clinical trials, we must assess the security of both the therapies themselves and the means used to create and administer them. If replacement organs and tissues are generated from cells derived from donors, what vulnerabilities do the donors have? How are those donor vulnerabilities passed along to the recipients? Yes, you have an immune system that does wonders most of the time. But are your natural systems up to the task of handling the biosecurity of augmented organs?

What does security even mean in this context? In addition to standard patient work-ups, should we begin to fully sequence the genomes of donor tissues, first to identify potential known health issues, and then to build a database that can be re-queried as new genetic links to disease are discovered? Are there security holes in the 3D printers and other devices used to manipulate cells and tissues? What are the long term security implications of deploying novel therapeutic tissues in large numbers of military and civilian personnel? What are the long-term security implications of using both donor and patient tissue as seeds for induced pluripotent stem cells, or of differentiating any stem cell line for use in therapies? Do we fully understand the complement of microbes and genomes that may be present in donor samples, or lying dormant in donor genomes, or that may be introduced via laboratory procedures and instruments used to process cells for use as therapies? What is the genetic security of a modified cell line or induced pluripotent stem cell? If there is a genetic modification embedded in your replacement heart tissue, where did the new DNA come from, and are you sure you know everything that it encodes? As with information technologies, we should expect that these new biological technologies will sometimes arrive with accidental vulnerabilities; they may also come with intentionally introduced back doors. The economic motivation to create new prostheses, as well as to exploit vulnerabilities, will soon introduce market competition as a factor in biosecurity.

Competition often drives perverse strategic decisions when it comes to security. Firms rush to sell hardware and software that are said to be secure, only to discover that constant updates are required to patch security holes. We are surrounded by products in endless beta. Worse yet, manufacturers have been known to sit on security holes in the naive hope that no one else will notice. Vendors sometimes appear no more literate about the security of hardware and software than are their customers. What will the world look like when electromechanical and biological prostheses are similarly in constant states of upgrade? Who will you trust to build/print/grow a prosthesis? Are you going to place your faith in the FDA to police all these risks? (Really?) If you decide instead to place your faith in the market, how will you judge the trustworthiness of firms that sell aftermarket security solutions for your bionic leg or replacement liver?

The complexity of the task at hand is nearly overwhelming. Understanding the coming fusion of technologies will require competency in software, hardware, wetware, and security -- where are those skill sets being developed in a compatible, integrated manner? This just leads to more questions: Are there particular countries that will have a competitive advantage in this area? Are there particular countries that will be hotbeds of prosthesis malware creation and distribution?

The conception of security, whether of individuals or nation states, is going to change dramatically as we become ever more economically dependent upon the market for biological technologies. Given the spreading capability to participate and innovate in technology development, which inevitably amplifies the number and effect of vulnerabilities of all kinds, I suspect we need to re-envision at a very high level how security works.

[Coming soon: Part 3.]

 

Biosecurity is Everyone's Business (Part 1)

Part 1. The ecosystem is the enterprise

We live in a society increasingly reliant upon the fruits of nature. We consume those fruits directly, and we cultivate them as feedstocks for fuel, industrial materials, and the threads on our backs. As a measure of our dependence, revenues in the bioeconomy are rising rapidly, demonstrating a demand for biological products that is growing much faster than the global economy as a whole.

This demand represents an enormous market pull on technology development, commercialization, and, ultimately, natural resources that serve as feedstocks for biological production. Consequently, we must assess carefully the health and longevity of those resources. Unfortunately, it is becoming ever clearer that the natural systems serving to supply our demand are under severe stress. We have been assaulting nature for centuries, with the heaviest blows delivered most recently. Nature, in the most encompassing sense of the word, has been astonishingly resilient in the face of this assault. But the accumulated damage has cracked multiple holes in ecosystems around the globe. There are very clear economic costs to this damage -- costs that compound over time -- and the cumulative damage now poses a threat to the availability of the water, farmland, and organisms we rely on to feed ourselves and our economy.

I would like to clarify that I am not predicting collapse, nor that we will run out of resources; rather, I expect new technologies to continue increasing productivity and improving the human condition. Successfully developing and deploying those technologies will, obviously, further increase our economic dependency on nature. As part of that growing dependency, businesses that participate in the bioeconomy must understand and ensure the security of feedstocks, transportation links, and end use, often at a global scale. Consequently, it behooves us to thoroughly evaluate any vulnerabilities we are building into the system so that we can begin to prepare for inevitable contingencies.

Revisiting the definition of biosecurity: from national security to natural security, and beyond

Last year John Mecklin at Bulletin of the Atomic Scientists asked me to consider the security implications of the emerging conversation (or, perhaps, collision) between synthetic biology and conservation biology. This conversation started at a meeting last April at the University of Cambridge, and is summarized in a recent article in Oryx. What I came up with for BAS was an essay that cast very broadly the need to understand threats to all of the natural systems we depend on. Quantifying the economic benefit of those systems, and the risk inherent in our dependence upon them, led me directly to the concept of natural security.

Here I want to take a stab at expanding the conversation further. Rapidly rising revenues in the bioeconomy, and the rapidly expanding scope of its applications, must critically inform an evolving definition of biosecurity. In other words, because economic demand is driving technology proliferation, we must continually refine our understanding of what it is that we must secure and from where threats may arise.

Biosecurity has typically been interpreted as the physical security of individuals, institutions, and the food supply in the context of threats such as toxins and pathogens. These will, of course, continue to be important concerns: new influenza strains constantly emerge to threaten human and animal health; the (re?)emergent PEDS virus has killed an astonishing 10% of U.S. pigs this year alone; within the last few weeks there has been an alarming uptick in the number of human cases and deaths caused by MERS. Beyond these natural threats are pathogens created by state and non-state organizations, sometimes in the name of science and outbreak preparedness, and sometimes escaping containment to cause harm. Yet, however important these events are, they are but pieces of a biosecurity puzzle that is becoming ever more complex.

Due to the large and growing contribution of the bioeconomy, no longer are governments concerned merely with the proverbial white powder produced in a state-sponsored lab, or even in a 'cave' in Afghanistan. Because economic security is now generally included in the definition of national security, the security of crops, drug production facilities, and industrial biotech will constitute an ever more important concern. Moreover, in the U.S., as made clear by the National Strategy for Countering Biological Threats (PDF), the government has established that encouraging the development and use of biological technologies in unconventional environments (i.e., "garages and basements") is central to national security. Consequently, the concept of biosecurity must comprise the entire value chain from academics and garage innovators, through production and use, to, more traditionally, the health of crops, farm animals, and humans. We must endeavor to understand the fragility at every link in this chain, and to buttress each one.

Beyond the security of specific links in the bioeconomy value chain, we must examine the explicit and implicit connections between them, because through our behavior we connect them. We transport organisms around the world; we actively breed plants, animals, and microbes; we create new objects with flaws; we emit waste into the world. It's really not that complicated. However, we often choose to ignore these connections because acknowledging them would require us to respect them, and consequently to behave differently. But that change in behavior must be the future of biosecurity.

From an enterprise perspective, as we rely ever more heavily on biology in our economy, so must we define 'biosecurity' comprehensively enough to encompass all of the relevant systems. Vulnerabilities in those systems may be introduced intentionally or accidentally. An accidental vulnerability may lie undiscovered for years -- as in the case of the recently disclosed Heartbleed hole in OpenSSL, a widely used implementation of the SSL/TLS internet security protocols -- and it becomes a threat only once it is identified. The risk, even in open source software, is that the vulnerability may be identified by organizations that then exploit it before it becomes widely known. This is reported to be true of the NSA's understanding and exploitation of Heartbleed at least two years in advance of its recent public disclosure. Our biosecurity challenge is to carefully, and constantly, assess how the world is changing and to address shortcomings as we find them. It will be a transition every bit as painful as the one we are now experiencing for hardware and software security.
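For readers unfamiliar with why Heartbleed was so damaging, the flaw is simple to caricature: the server trusted a length field supplied by the requester instead of checking it against the data actually sent, and so echoed back adjacent memory. The Python below is a toy simulation of that pattern under my own simplified assumptions; it is not the actual OpenSSL code, and the "memory" contents are invented placeholders.

```python
# Toy simulation of a Heartbleed-style flaw: the handler trusts an
# attacker-declared payload length rather than the actual payload size.
# Illustrative only; not the real OpenSSL implementation.

SERVER_MEMORY = bytearray(
    b"HEARTBEAT_BUFFER:hello" + b"...private key material...session tokens..."
)

def heartbeat_response(payload: bytes, claimed_length: int) -> bytes:
    """Vulnerable: reads `claimed_length` bytes from the payload's location
    in memory without checking it against len(payload)."""
    start = SERVER_MEMORY.find(payload)
    return bytes(SERVER_MEMORY[start:start + claimed_length])  # over-read

def heartbeat_response_fixed(payload: bytes, claimed_length: int) -> bytes:
    """Patched: reject any request whose declared length exceeds the payload."""
    if claimed_length > len(payload):
        raise ValueError("declared length exceeds actual payload")
    return payload[:claimed_length]

if __name__ == "__main__":
    # A 5-byte payload claimed to be 60 bytes long leaks adjacent "memory".
    print(heartbeat_response(b"hello", 60))
```

The biological analogue is the point of this essay: a single unchecked assumption, replicated across an entire installed base -- whether of web servers or of engineered cells and prostheses -- becomes a systemic vulnerability the moment someone notices it.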

(Here is Part 2.)