DNA Cost and Productivity Data, aka "Carlson Curves"

I have received a number of requests in recent days for my early DNA synthesis and productivity data, so I have decided to post it here for all who are interested. Please remember where you found it.

A bit of history: my efforts to quantify the pace of change in biotech started in the summer of 2000, while I was trying to forecast where the industry was headed. At the time, I was a Research Fellow at the Molecular Sciences Institute (MSI) in Berkeley, working on what became the essay “Open Source Biology and Its Impact on Industry”, written in the summer of 2000 for the inaugural Shell/Economist World in 2050 Competition under the original title “Biological Technology in 2050”. I was trying to conceive of where things were going many decades out, and gathering these numbers seemed like a good way to anchor my thinking. I had the first, very rough, data set by about September of 2000. I presented the curves for the first time to an outside audience that summer, in the form of a Global Business Network (GBN) Learning Journey that stopped at MSI to see what we were up to. Among the attendees was Stewart Brand, who I understand soon started referring to the data as “Carlson Curves” in his own presentations. I published the data for the first time in 2003, in a paper titled “The Pace and Proliferation of Biological Technologies”. Somewhere in there Ray Kurzweil started making reference to the curves, and then a 2006 article in The Economist, “Life 2.0”, brought them to a wider audience and cemented the name. It took me years to get comfortable with “Carlson Curves” because, even if I did sort it out first, it is just data rather than a law of the universe. But eventually I got it through my thick skull that it is quite good advertising.

The data was very hard to come by when I started. Sequencing was still a labor-intensive enterprise, and therefore highly variable in cost, and synthesis was slow, expensive, and relatively rare. I had to call people up to get their rough estimates of how much time and effort they were putting in, and also had to root around in journal articles and technical notes looking for any quantitative data on instrument performance. This was so early in the development of the field that, when I submitted what became the 2003 paper, one of the reviews came back with the criticism that the reviewer – certainly the infamous Reviewer Number 2 – was “unaware of any data suggesting that sequencing is improving exponentially”.

Well, yes, that was the first paper that collected such data.

The review process led to somewhat labored language in the paper asserting the “appearance” of exponential progress when comparing the data to Moore's Law. I also recall showing Freeman Dyson the early data, and he cast a very skeptical eye on the conclusion that there were any exponentials to be written about. The data was, in all fairness, a bit thin at the time. But the trend seemed clear to me, and the paper laid out why I thought the exponential trends would, or would not, continue. Stewart Brand, and Drew Endy at the next lab bench over, grokked it all immediately, which lent some comfort that I wasn’t sticking my neck out so very far.

I've written previously about when the comparison with Moore's Law does, and does not, make sense. (Here, here, and here.) Many people choose to ignore the subtleties. I won't belabor the details here, other than to observe succinctly that the role of DNA in constructing new objects is, at least for the time being, fundamentally different from that of transistors. For the last forty years, the improved performance of each new generation of chip and electronic device has depended on those objects containing more transistors, and the demand for greater performance has driven an increase in the number of transistors per object. In contrast, the economic value of synthetic DNA is decoupled from the economic value of the object it codes for; in principle you only need one copy of DNA to produce many billions of objects and many billions of dollars in value.

To be sure, prototyping and screening of new molecular circuits requires quite a bit more than one copy of the DNA in question, but once you have your final sequence in hand, your need for additional synthesis for that object goes to zero. And even while the total demand for synthetic DNA has grown over the years, the price per base has on average fallen about as fast; consequently, as best I can tell, the total dollar value of the industry hasn't grown much over the last ten years. This makes it very difficult to make money in the DNA synthesis business, and may help explain why so many DNA synthesis companies have gone bankrupt or been folded into other operations. Indeed, most of the companies that provided DNA or gene synthesis as a service no longer exist. Due to similar business-model challenges, it is difficult to sell stand-alone synthesis instruments. Thus the productivity data series for synthesis instruments ends several years ago, because it is too difficult to evaluate the performance of proprietary instruments run solely by the remaining service providers. DNA synthesis is likely to remain a difficult business until there is a business model in which the final value of the product, whatever that product is, depends on the actual number of bases synthesized and sold. As I have written before, I think that business model is likely to be DNA data storage. But we shall see.
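To make the arithmetic behind that last point concrete, here is a minimal sketch, with purely illustrative numbers rather than actual market data, of why total revenue can stay flat even as synthesis volume grows:

```python
# Illustrative only: if the price per base falls about as fast as demand grows,
# total revenue (price x volume) stays roughly constant.
price_per_base = 0.50   # hypothetical starting price, USD per base
bases_sold = 1e8        # hypothetical starting annual volume, bases

for year in range(5):
    revenue = price_per_base * bases_sold
    print(f"year {year}: revenue ${revenue:,.0f}")
    price_per_base /= 2   # price halves...
    bases_sold *= 2       # ...while demand doubles, so revenue is unchanged
```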

The business of sequencing, of course, is another matter. It's booming. But as far as the “Carlson Curves” go, I long ago gave up trying to track this on my own, because a few years after the 2003 paper came out the NHGRI started tracking and publishing sequencing costs. Everyone should just use that data. I do.

Finally, a word on cost versus price. For normal, healthy businesses, you expect the price of something to exceed its cost, and for the business to make at least a little bit of money. But when it comes to DNA, especially synthesis, it has always been difficult to determine the true cost, because the price per base has frequently been below the cost, which is part of why those businesses have gone bankrupt. Some service operations are intentionally run at negative margins in order to attract business; that is, they are loss leaders for other services, or they are run at scale so that the company retains access to that capacity for its own internal projects. There are a few operations that appear to be priced so that they are at least revenue neutral and don't lose money. Thus there can be a wide range of prices at this point in time, which further complicates sorting out how the technology may be improving and what impact this has on the economics of biotech. Moreover, we might expect the price of synthetic DNA to *increase* occasionally, either because providers can no longer afford to lose money or because competition is reduced. There is no technological determinism here. Just as Moore's Law is ultimately a function of industrial planning and expectations, there is nothing about Carlson Curves that says prices must fall monotonically.

A note on methods and sources: as described in the 2003 paper, this data was generally gathered by calling people up or by extracting what information I could from what little was written down and published at the time. The same is true for later data. The quality of the data is limited primarily by that availability and by how much time I could spend to develop it. I would be perfectly delighted to have someone with more resources build a better data set.

The primary academic references for this work are:

Robert Carlson, “The Pace and Proliferation of Biological Technologies”. Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science. Sep, 2003, 203-214. http://doi.org/10.1089/153871303769201851.

Robert Carlson, “The changing economics of DNA synthesis”. Nat Biotechnol 27, 1091–1094 (2009). https://doi.org/10.1038/nbt1209-1091.

Robert Carlson, Biology Is Technology: The Promise, Peril, and New Business of Engineering Life, Harvard University Press, 2011.

Here are my latest versions of the figures, followed by the data. Updates and commentary are on the Bioeconomy Dashboard.

Creative Commons image license (Attribution-NoDerivatives 4.0 International (CC BY-ND 4.0)) terms:

  • Share — copy and redistribute the material in any medium or format for any purpose, even commercially.

  • Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.

  • NoDerivatives — If you remix, transform, or build upon the material, you may not distribute the modified material.

Here is the cost data (units in [USD per base]):

Year    DNA Sequencing    Short Oligo (Column)    Gene Synthesis
1990    25
1991
1992                      1
1993
1994
1995    1                 0.75
1996
1997
1998
1999                                              25
2000    0.25              0.3
2001                                              12
2002                                              8
2003    0.05              0.15                    4
2004    0.025
2005
2006    0.00075           0.1                     1
2007                                              0.5
2008
2009    8E-06             0.08                    0.39
2010    3.17E-06          0.07                    0.35
2011    2.3E-06           0.07                    0.29
2012    1.6E-06           0.06                    0.2
2013    1.6E-06           0.06                    0.18
2014    1.6E-06           0.06                    0.15
2015    1.6E-09
2016    1.6E-09           0.05                    0.03
2017    1.6E-09           0.05                    0.02
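For anyone who wants to quantify the trend, here is a minimal sketch in plain Python that fits an exponential to the gene synthesis series taken from the table above and reports the implied halving time; the same approach works for the other columns.

```python
import math

# Gene synthesis price, USD per base, taken from the table above.
years = [1999, 2001, 2002, 2003, 2006, 2007, 2009, 2010,
         2011, 2012, 2013, 2014, 2016, 2017]
price = [25, 12, 8, 4, 1, 0.5, 0.39, 0.35,
         0.29, 0.2, 0.18, 0.15, 0.03, 0.02]

# Least-squares fit of log2(price) against year; the slope is the number of
# halvings per year, so its reciprocal is the halving time.
n = len(years)
x_mean = sum(years) / n
logs = [math.log2(p) for p in price]
y_mean = sum(logs) / n
num = sum((x - x_mean) * (y - y_mean) for x, y in zip(years, logs))
den = sum((x - x_mean) ** 2 for x in years)
slope = num / den

print(f"fit slope: {slope:.2f} log2 units per year")
print(f"implied price halving time: {abs(1 / slope):.1f} years")
```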

Here is the productivity data (units in [bases per person per day] and [number of transistors per chip]) — note that commercially available synthesis instruments were not sold new for the decade following 2011, and I have not sat down to figure out the productivity of any of the new boxes that may be for sale as of today:

Year    Reading DNA     Writing DNA     Transistors
1971                                    2250
1972                                    2500
1974                                    5000
1978                                    29000
1982                                    1.20E+05
1985                                    2.75E+05
1986    25600
1988                                    1.18E+06
1990                    200
1993                                    3.10E+06
1994    62400
1996
1997    4.22E+05        15320
1998                                    7.50E+06
1999    576000                          2.40E+07
2000                    1.38E+05        4.20E+07
2001
2002
2003                                    2.20E+08
2004                                    5.92E+08
2005
2006    10000000
2007    200000000       2500000
2008                                    2000000000
2009    6000000000
2010    17000000000
2011                                    2600000000
2012    54000000000
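As a rough cross-check on the Moore's Law comparison discussed earlier, here is a back-of-the-envelope sketch that uses only the first and last points of each series in the table above; a proper analysis would fit all the points, so treat the output as indicative rather than definitive.

```python
import math

def doubling_time(t0, v0, t1, v1):
    """Years per doubling implied by two (year, value) endpoints."""
    return (t1 - t0) * math.log(2) / math.log(v1 / v0)

# Endpoint values taken from the productivity table above.
print(f"Reading DNA: {doubling_time(1986, 25_600, 2012, 54_000_000_000):.2f} years per doubling")
print(f"Writing DNA: {doubling_time(1990, 200, 2007, 2_500_000):.2f} years per doubling")
print(f"Transistors: {doubling_time(1971, 2_250, 2011, 2_600_000_000):.2f} years per doubling")
```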

Uncertainty in the Time of COVID-19, Part 2

Part 2: How Do We Know What We Know?

When a new pathogen first shows up to threaten human lives, ignorance dominates knowledge. The faster we retire our ignorance and maximize our knowledge, the better our response to any novel threat. The good news is that knowledge of what is happening during the current COVID-19 pandemic is accumulating more rapidly than it did during the SARS outbreak, in part because we have new tools available, and in part because Chinese clinicians and scientists are publishing more, and faster, than in 2003. And yet there is still a great deal of ignorance about this pathogen, and that ignorance breeds uncertainty. While it is true that the virus we are now calling SARS-CoV-2 is relatively closely related genetically to the SARS-CoV that emerged in 2002, the resulting disease we call COVID-19 is notably different from SARS. This post will dig into what methods and tools are being used today in diagnosis and tracking, what epidemiological knowledge is accumulating, and which error bars and assumptions are absent, misunderstood, or simply wrong.

First, in all of these posts I will keep a running update of good sources of information. The Atlantic continues its excellent reporting on the lack of testing in the US by digging into the decision-making process, or lack thereof, that resulted in our current predicament. I am finding it useful to read the China CDC Weekly Reports, which constitute source data and anecdotes used in many other articles and reports.

Before diving in any further, I would observe that it is now clear that extreme social distancing works to halt the spread of the virus, at least temporarily, as demonstrated in China. It is also clear that, with widespread testing, the spread can also be controlled with less severe restrictions — but only if you assay the population adequately, which means running tests on as many people as possible, not just those who are obviously sick and in hospital.

Why does any of this matter?

In what follows, I get down into the weeds of sources of error and of sampling strategies. I suggest that the way we are using tests is obscuring, rather than improving, our understanding of what is happening. You might look at this, if you are an epidemiologist or public health person, and say that these details are irrelevant because all we really care about are actions that work to limit or slow the spread. Ultimately, as the goal is to save lives and reduce suffering, and since China has demonstrated that extreme social distancing can work to limit the spread of COVID-19, the argument might be that we should just implement the same measures and be done with it. I am certainly sympathetic to this view, and we should definitely implement measures to restrict the spread of the virus.

But it isn’t that simple. First, because the population infection data is still so poor, even in China (though perhaps not in South Korea, as I explore below), every statement about successful control is in actuality still a hypothesis, yet to be tested. Those tests will come in the form of 1) additional exposure data, such as population serology studies that identify the full extent of viral spread by looking for antibodies to the virus, which persist long after an infection is resolved, and 2) carefully tracking what happens when social distancing and quarantine measures are lifted. Prior pandemics, in particular the 1918 influenza episode, showed waves of infections that recurred for years after the initial outbreak. Some of those waves are clearly attributable to premature reduction in social distancing, and different interpretations of data may have contributed to those decisions. (Have a look at this post by Tomas Pueyo, which is generally quite good, for the section with the heading “Learnings from the 1918 Flu Pandemic”.) Consequently, we need to carefully consider exactly what our current data sets are teaching us about SARS-CoV-2 and COVID-19, and, indeed, whether current data sets are teaching us anything helpful at all.

What is COVID-19?

Leading off the discussion of uncertainty are differences in the most basic description of the disease known as COVID-19. The list of observed symptoms — that is, visible impacts on the human body — from the CDC includes only fever, cough, and shortness of breath, while the WHO website list is more expansive, with fever, tiredness, dry cough, aches and pains, nasal congestion, runny nose, sore throat, or diarrhea. The WHO-China Joint Mission report from last month (PDF) is more quantitative: fever (87.9%), dry cough (67.7%), fatigue (38.1%), sputum production (33.4%), shortness of breath (18.6%), sore throat (13.9%), headache (13.6%), myalgia or arthralgia (14.8%), chills (11.4%), nausea or vomiting (5.0%), nasal congestion (4.8%), diarrhea (3.7%), hemoptysis (0.9%), and conjunctival congestion (0.8%). Note that the preceding list, while quantitative in the sense that it reports the frequency of symptoms, is ultimately a list of qualitative judgements by humans.

The Joint Mission report continues with a slightly more quantitative set of statements:

Most people infected with COVID-19 virus have mild disease and recover. Approximately 80% of laboratory confirmed patients have had mild to moderate disease, which includes non-pneumonia and pneumonia cases, 13.8% have severe disease (dyspnea, respiratory frequency ≥30/minute, blood oxygen saturation ≤93%, PaO2/FiO2 ratio <300, and/or lung infiltrates >50% of the lung field within 24-48 hours) and 6.1% are critical (respiratory failure, septic shock, and/or multiple organ dysfunction/failure).

The rate of hospitalization, seriousness of symptoms, and ultimately the fatality rate depend strongly on age and, in a source of more uncertainty, perhaps on geography, points I will return to below.

What is the fatality rate, and why does it vary so much?

The Economist has a nice article exploring the wide variation in reported and estimated fatality rates, which I encourage you to read (also this means I don’t have to write it). One conclusion from that article is that we are probably misestimating fatalities due to measurement error. The total rate of infection is probably higher than is being reported, and the absolute number of fatalities is probably higher than generally understood. To this miscalculation I would add an additional layer of obfuscation, which I happened upon in my earlier work on SARS and the flu.

It turns out that we are probably significantly undercounting deaths due to influenza. This hypothesis is driven by a set of observations of anticorrelations between flu vaccination and deaths ascribed to stroke, myocardial infarction (“heart attack”), and “sudden cardiac death”, where the latter is the largest cause of “natural” death in the United States. Influenza immunization reduces the rate of those causes of death by 50-75%. The authors conclude that the actual number of people who die from influenza infections could be 2.5X-5X higher than the oft-cited 20,000-40,000.

How could the standard estimate be so far off? Consider these two situations: First, if a patient is at the doctor or in the hospital due to symptoms of the flu, they are likely to undergo a test to rule in, or out, the flu. But if a patient comes into the ER in distress and then passes away, or if they die before getting to the hospital, then that molecular diagnostic is much less likely to be used. And if the patient is elderly and already suffering from an obvious likely cause of death, for example congestive heart failure, kidney failure, or cancer, then that is likely to be what goes on the death certificate. Consequently, particularly among older people with obvious preexisting conditions, the case fatality rate for influenza is likely to be underestimated, and that is for a pathogen that is relatively well understood and for which there is unlikely to be a shortage of diagnostic kits.

Having set that stage, it is no leap at all to hypothesize that the fatality rate for COVID-19 is likely to be significantly underestimated. And then if you add in insufficient testing, and thus insufficient diagnostics, as I explore below, it seems likely that many fatalities caused by COVID-19 will be attributed to something else, particularly among the elderly. The disease is already quite serious among those diagnosed who are older than 70. I expect that the final toll will be greater in communities that do not get the disease under control.

[Figure: Fatality rate in China as reported by China CDC.]

How is COVID-19 diagnosed?

For most of history, medical diagnoses have been determined by comparing patient symptoms (again, these are human-observable impacts on a patient, usually constituting natural language nouns and adjectives) with lists that doctors together agree define a particular condition. Recently, this qualitative methodology has been slowly amended with quantitative measures as they have become available: e.g., pulse, blood pressure, EEG and EKG, blood oxygen content, “five part diff” (which quantifies different kinds of blood cells), CT, MRI, blood sugar levels, liver enzyme activity, lung and heart pumping volume, viral load, and now DNA and RNA sequencing of tissues and pathogens. These latter tools have become particularly important in genetically tracking the spread of SARS-CoV-2, because by following the sequence around the world you can sort out at the individual case level where it came from. And then simply being able to specifically detect viral RNA to provide a diagnosis is important because COVID-19 symptoms (other than fatality rate) are quite similar to those of the seasonal flu. Beyond differentiating COVID-19 from “influenza like illness”, new tools are being brought to bear that enable near-real-time quantification of viral RNA, which enables estimating viral load (number of viruses per sample volume), and which in turn facilitates 1) understanding how the disease progresses and 2) estimating how infectious patients are over time. These molecular assays are the result of decades of technology improvement, which has resulted in highly automated systems that take in raw clinical samples, process them, and deliver results electronically. At least in those labs that can afford such devices. Beyond these achievements, novel diagnostic methods based on the relatively recent development of CRISPR as a tool are already in the queue to be approved for use amidst the current pandemic. The pandemic is serving as a shock to the system to move diagnostic technology faster. We are watching in real time a momentous transition in the history of medicine, which is giving us a glimpse of the future. How are all these tools being applied today?
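Before getting to that, and for readers unfamiliar with how “quantification of viral RNA” works in practice, here is a minimal sketch of the standard-curve arithmetic behind quantitative RT-PCR; the slope, intercept, and Ct values are hypothetical placeholders, not parameters of any particular assay.

```python
# Quantitative PCR reports a cycle threshold (Ct); copy number is recovered
# from a standard curve of the form Ct = slope * log10(copies) + intercept.
# The parameters and Ct values below are hypothetical, for illustration only.
SLOPE = -3.32       # roughly -3.32 corresponds to ~100% amplification efficiency
INTERCEPT = 40.0    # Ct expected for a single input copy in this toy example

def copies_from_ct(ct):
    """Estimated RNA copies per reaction implied by a Ct value."""
    return 10 ** ((ct - INTERCEPT) / SLOPE)

for ct in (20, 25, 30, 35):
    print(f"Ct {ct}: ~{copies_from_ct(ct):.2e} RNA copies per reaction")
```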

(Note: My original intention with this post was to look at the error rates of all the steps for each diagnostic method. I will explain why I think this is important, but other matters are more pressing at present, so the detailed error analysis will get short shrift for now.)

Recapitulating an explanation of relevant diagnostics from Part 1 of this series (with a slight change in organization):

There are three primary means of diagnosis:

1. The first is by display of symptoms, which can span a long list of cold-like runny nose, fever, sore throat, upper respiratory features, to much less pleasant, and in some cases deadly, lower respiratory impairment. (I recently heard an expert on the virus say that there are two primary ways that SARS-like viruses can kill you: “Either your lungs fill up with fluid, limiting your access to oxygen, and you drown, or all the epithelial cells in your lungs slough off, limiting your access to oxygen, and you suffocate.” Secondary infections are also more lethal for people experiencing COVID-19 symptoms.)

2. The second method of diagnosis is imaging of lungs, which includes x-ray and CT scans; SARS-CoV-2 causes particular pathologies in the lungs that can be identified on images and that distinguish it from other respiratory viruses.

3. Thirdly, the virus can be diagnosed via two molecular assays, the first of which uses antibodies to directly look for viral proteins in tissue or fluid samples, while the other looks for whether genetic material is present; sophisticated versions can quantify how many copies of viral RNA are present in a sample.

Imaging of lungs via x-ray and CT scan appears to be an excellent means to diagnose COVID-19 due to a distinct set of morphological features that appear throughout infected tissue, though those features also appear to change during the course of the disease. This study also examined diagnosis via PCR assays, and found a surprisingly high rate of false negatives. It is not clear from the text whether all patients had two independent swabs and accompanying tests, so either 10 or 12 total tests were done. If 10 were done, there are two clear false negatives, for a 20% failure rate; if 12 were done, there are up to four false negatives, for a 33% failure rate. The authors observe that “the false negative rate of oropharyngeal swabs seems high.” Note that this study directly compares the molecular assay with imaging, and the swab/PCR combo definitely comes up short. This is important because imaging is low throughput and expensive; to definitively diagnose even the serious cases, let alone start sampling the larger population to track and try to get ahead of the outbreak, we need rapid, accurate molecular assays. We need to have confidence in testing.

How does “testing” work? First, testing is not some science fiction process that involves pointing a semi-magical instrument like a Tricorder at a patient and instantly getting a diagnosis. In reality, testing involves multiple process steps implemented by humans — humans who sometimes are inadequately trained or who make mistakes. And then each of those process steps has an associated error or failure rate. You almost never hear about the rate of mistakes, errors, or failures in reporting on “testing”, and that is a problem.
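To see why per-step failure rates matter, here is a minimal sketch of how errors compound across a multi-step testing workflow; the step names and success rates are made up for illustration, precisely because real values are so rarely reported.

```python
# Hypothetical per-step success rates for a swab-to-result workflow.
# All numbers are invented for illustration; they are not measured values.
steps = {
    "sample collection (good swab)": 0.90,
    "transport and storage (RNA intact)": 0.97,
    "RNA extraction": 0.95,
    "RT-PCR detection, given good input": 0.95,
}

overall = 1.0
for name, p in steps.items():
    overall *= p
    print(f"{name}: {p:.0%}")

# Even modest per-step losses compound into a noticeable false-negative rate.
print(f"overall chance an infected patient tests positive: {overall:.0%}")
```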

Let’s take the testing process in order. For sample collection the CDC Recommendations include nasopharyngeal and oropharyngeal (i.e., nose and throat) swabs. Here is the Wikipedia page on RT-PCR, which is a pretty good place to start if you are new to these concepts.

The Seattle Flu Study and the UW Virology COVID-19 program often rely on home sample collection from nasal and throat swabs. My initial concern about this testing method was motivated in part by the fact that it was quite difficult to develop a swab-PCR for SARS-CoV that delivered consistent results, where part of the difficulty was simply in collecting a good patient sample. I have a nagging fear that not everyone who is collecting these samples today is adequately trained to get a good result, or has been tested to ensure they are good at this skill. The number of sample takers has clearly expanded significantly around the world in the last couple of weeks, with more expansion to come. So I leave this topic with a question: is there a clinical study that examines the success rate of sample collection by people who are not trained to do this every day?

On to the assays themselves: I am primarily concerned at the moment with the error bars on the detection assays. The RT-PCR assay data in China are not reported with errors (or even variance, which would be an improvement). Imaging is claimed to be 90-95% accurate (against what standard is unclear), and the molecular assays worse than that by some amount. Anecdotal reports are that they have only been 50-70% accurate, with assertions of as low as 10% in some cases. This suggests that, in addition to large probable variation in the detectable viral load, and possible quality variations in the kits themselves, human sample handling and lab error is quite likely the dominant factor in accuracy. There was a report of an automated high-throughput testing lab getting set up in a hurry in Wuhan a couple of weeks ago, which might be great if the reagent quality is sorted, but I haven’t seen any reports of whether that worked out. So the idea that the “confirmed” case counts are representative of reality even in hospitals or care facilities is tenuous at best. South Korea has certainly done a better job of adequate testing, but even there questions remain about the accuracy of the testing, as reported by the Financial Times:

Hong Ki-ho, a doctor at Seoul Medical Centre, believed the accuracy of the country’s coronavirus tests was “99 per cent — the highest in the world”. He pointed to the rapid commercial development and deployment of new test kits enabled by a fast-tracked regulatory process. “We have allowed test kits based on WHO protocols and never followed China’s test methods,” Dr Hong said.

However, Choi Jae-wook, a medical professor of preventive medicine at Korea University, remained “worried”. “Many of the kits used at the beginning stage of the outbreak were the same as those in China where the accuracy was questioned . . . We have been hesitating to voice our concern because this could worry the public even more,” Mr Choi said.

At some point (hopefully soon) we will see antibody-based tests being deployed that will enable serology studies of who has been previously infected. The US CDC is developing these serologic tests now, and we should all hope the results are better than the initial round of CDC-produced PCR tests. We may also be fortunate and find that these assays could be useful for diagnosis, as lateral flow assays (like pregnancy tests) can be much faster than PCR assays. Eventually something will work, because this antibody detection is tried and true technology.

To sum up: I had been quite concerned about reports of problems (high error rates) with the PCR assay in China and in South Korea. Fortunately, it appears that more recent PCR data is more trustworthy (as I will discuss below), and that automated infrastructure being deployed in the US and Europe may improve matters further. The automated testing instruments being rolled out in the US should — should — have lower error rates and higher accuracy. I still worry about the error rate on the sample collection. However, detection of the virus may be facilitated because the upper respiratory viral load for SARS-CoV-2 appears to be much higher than for SARS-CoV, a finding with further implications that I will explore below.

How is the virus spread?

(Note: the reporting on asymptomatic spread has changed a great deal just in the last 24 hours. Not all of what appears below is updated to reflect this yet.)

The standard line, if there can be one at this point, has been that the virus is spread by close contact with symptomatic patients. This view is bolstered by claims in the WHO Joint Mission report: “Asymptomatic infection has been reported, but the majority of the relatively rare cases who are asymptomatic on the date of identification/report went on to develop disease. The proportion of truly asymptomatic infections is unclear but appears to be relatively rare and does not appear to be a major driver of transmission.”(p.12) These claims are not consistent with a growing body of clinical observations. Pinning down the rate of asymptomatic, or presymptomatic, infections is important for understanding how the disease spreads. Combining that rate with evidence that patients are infectious while asymptomatic, or presymptomatic, is critical for planning response and for understanding the impact of social distancing.

Two sentences in the Science news piece describing the Joint Mission report undermine all the quantitative claims about impact and control: “A critical unknown is how many mild or asymptomatic cases occur. If large numbers of infections are below the radar, that complicates attempts to isolate infectious people and slow spread of the virus.” Nature picked up this question earlier this week: “How much is coronavirus spreading under the radar?” The answer: probably quite a lot.

A study of cases apparently contracted in a shopping mall in Wenzhou concluded that the most likely explanation for the pattern of spread is “that indirect transmission of the causative virus occurred, perhaps resulting from virus contamination of common objects, virus aerosolization in a confined space, or spread from asymptomatic infected persons.”

Another recent paper, in which the authors built an epidemiological transmission model of all the documented cases in Wuhan, found that, at best, only 41% of the total infections were “ascertained” by diagnosis, while the most likely ascertainment rate was a mere 21%. That is, the model best fits the documented case statistics when 79% of the total infections were unaccounted for by direct diagnosis.

Finally, a recent study of patients early after infection clearly shows “that COVID-19 can often present as a common cold-like illness. SARS-CoV-2 can actively replicate in the upper respiratory tract, and is shed for a prolonged time after symptoms end, including in stool.” The comprehensive virological study demonstrates “active [infectious] virus replication in upper respiratory tract tissues”, which leads to a hypothesis that people can present with cold-like symptoms and be infectious. I will quote more extensively from the abstract, as this bit is crucially important:

Pharyngeal virus shedding was very high during the first week of symptoms (peak at 7.11 X 10^8 RNA copies per throat swab, day 4). Infectious virus was readily isolated from throat- and lung-derived samples, but not from stool samples in spite of high virus RNA concentration. Blood and urine never yielded virus. Active replication in the throat was confirmed by viral replicative RNA intermediates in throat samples. Sequence-distinct virus populations were consistently detected in throat- and lung samples of one same patient. Shedding of viral RNA from sputum outlasted the end of symptoms. Seroconversion occurred after 6-12 days, but was not followed by a rapid decline of viral loads.

That is, you can be sick for a week with minimal- to mild symptoms, shedding infectious virus, before antibodies to the virus are detectable. (This study also found that “Diagnostic testing suggests that simple throat swabs will provide sufficient sensitivity at this stage of infection. This is in stark contrast to SARS.” Thus my comments above about reduced concern about sampling methodology.)

So the virus is easy to detect because it is plentiful in the throat, which unfortunately also means that it is easy to spread. And then even after you begin to have a specific immune response, detectable as the presence of antibodies in blood, viral loads stay high.

The authors conclude, rather dryly, with an observation that “These findings suggest adjustments of current case definitions and re-evaluation of the prospects of outbreak containment.” Indeed.

One last observation from this paper is eye opening, and needs much more study: “Striking additional evidence for independent replication in the throat is provided by sequence findings in one patient who consistently showed a distinct virus in her throat as opposed to the lung.” I am not sure we have seen something like this before. Given the high rate of recombination between strains in this family of betacoronaviruses (see Part 1), I want to flag the infection of different tissues by different strains as a possibly worrying route to more viral innovation, that is, evolution.

STAT+ News summarizes the above study as follows:

The researchers found very high levels of virus emitted from the throat of patients from the earliest point in their illness —when people are generally still going about their daily routines. Viral shedding dropped after day 5 in all but two of the patients, who had more serious illness. The two, who developed early signs of pneumonia, continued to shed high levels of virus from the throat until about day 10 or 11.

This pattern of virus shedding is a marked departure from what was seen with the SARS coronavirus, which ignited an outbreak in 2002-2003. With that disease, peak shedding of virus occurred later, when the virus had moved into the deep lungs.

Shedding from the upper airways early in infection makes for a virus that is much harder to contain. The scientists said at peak shedding, people with Covid-19 are emitting more than 1,000 times more virus than was emitted during peak shedding of SARS infection, a fact that likely explains the rapid spread of the virus. 

Yesterday, CNN joined the chorus of reporting on the role of asymptomatic spread. It is a nice summary, and makes clear that not only is “presymptomatic transmission commonplace”, it is a demonstrably significant driver of infection. Michael Osterholm, director of the Center for Infectious Disease Research and Policy (CIDRAP) at the University of Minnesota, and always ready with a good quote, was given the opportunity to put the nail in the coffin on the denial of asymptomatic spread:

"At the very beginning of the outbreak, we had many questions about how transmission of this virus occurred. And unfortunately, we saw a number of people taking very firm stances about it was happening this way or it wasn't happening this way. And as we have continued to learn how transmission occurs with this outbreak, it is clear that many of those early statements were not correct," he said. 

"This is time for straight talk," he said. "This is time to tell the public what we know and don't know."

There is one final piece of the puzzle that we need to examine to get a better understanding of how the virus is spreading. You may have read about characterizing the infection rate by the basic reproduction number, R0, which is a statistical measure that captures the average dynamics of transmission. There is another metric, the “secondary attack rate” (SAR), which is a measurement of the rate of transmission in specific cases in which a transmission event is known to have occurred. The Joint Mission report cites an SAR in the range of 5-10% in family settings, which is already concerning. But there is another study (that, to be fair, came out after the Joint Mission report) of nine instances in Wuhan that calculates that the secondary attack rate in specific community settings is 35%. That is, assuming one initially infected person per room attended an event in which spread is known to have happened, on average 35% of those present were infected. In my mind, this is the primary justification for limiting social contacts — this virus appears to spread extremely well when people are in enclosed spaces together for a couple of hours, possibly handling and sharing food.
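For concreteness, here is a minimal sketch of how a secondary attack rate is computed from event data; the attendance and case counts below are invented for illustration and are not the figures from the Wuhan study.

```python
# Secondary attack rate (SAR): among the people exposed at an event attended by
# one infected person, what fraction became infected? Numbers are hypothetical.
events = [
    {"exposed": 20, "secondary_cases": 8},
    {"exposed": 35, "secondary_cases": 11},
    {"exposed": 12, "secondary_cases": 5},
]

for i, e in enumerate(events, start=1):
    sar = e["secondary_cases"] / e["exposed"]
    print(f"event {i}: SAR = {sar:.0%}")

pooled = sum(e["secondary_cases"] for e in events) / sum(e["exposed"] for e in events)
print(f"pooled SAR across events: {pooled:.0%}")
```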

Many missing pieces must be filled in to understand whether the high reported SAR above is representative globally. For instance, what were the environmental conditions (humidity, temperature) and ventilation like at those events? Was the source of the virus a food handler, or otherwise a focus of attention and close contact, or were they just another person in the room? Social distancing and eliminating public events was clearly important in disrupting the initial outbreak in Wuhan, but without more specific information about how community spread occurs we are just hanging on, hoping old-fashioned public health measures will slow the thing down until countermeasures (drugs and vaccines) are rolled out. And when the social control measures are lifted, the whole thing could blow up again. Here is Osterholm again, from the Science news article covering the Joint Mission report:

“There’s also uncertainty about what the virus, dubbed SARS-CoV-2, will do in China after the country inevitably lifts some of its strictest control measures and restarts its economy. COVID-19 cases may well increase again.”

“There’s no question they suppressed the outbreak,” says Mike Osterholm, head of the Center for Infectious Disease Research and Policy at the University of Minnesota, Twin Cities. “That’s like suppressing a forest fire, but not putting it out. It’ll come roaring right back.”

What is the age distribution of infections?

The short answer here is that everyone can get infected. The severity of one’s response appears to depend strongly on age, as does the final outcome of the disease (the “endpoint”, as it is somewhat ominously referred to). Here we run smack into another measurement problem, because in order to truly understand who is infected, we would need to be testing broadly across the population, including a generous sample of those who are not displaying symptoms. Because only South Korea has been sampling so widely, only South Korea appears to have a data set that gives some sense of the age distribution of infections across the whole population. Beyond the sampling problem, I found it difficult to find this sort of demographic data published anywhere on the web.

Below is the only age data I have been able to come up with, admirably cobbled together by Andreas Backhaus from screenshots of data out of South Korea and Italy.

Why would you care about this? Because, in many countries, policy makers have not yet closed schools, restaurants, or pubs that younger and healthier members of the population tend to frequent. If this population is either asymptomatic or mildly symptomatic, but still infectious — as indicated above — then they are almost certainly spreading virus not only amongst themselves, but also to members of their families who may be more likely to experience severe symptoms. Moreover, I am led to speculate by the different course of disease in different communities that the structure of social contacts could be playing a significant role in the spread of the virus. Countries that have a relatively high rate of multi-generational households, in which elderly relatives live under the same roof as young people, could be in for a rough ride with COVID-19. If young people are out in the community, exposed to the virus, then their elderly relatives at home have a much higher chance of contracting the virus. Here is the distribution of multigenerational households by region, according to the UN:

[Figure: Share of multigenerational households by region, according to the UN.]

The end result of all this is that we — humanity at large, and in particular North America and Europe — need to do a much better job of containment in our own communities in order to reduce morbidity and mortality caused by SARS-CoV-2.

How did we get off track with our response?

It is important to understand how the WHO got the conclusion about the modes of infection wrong. By communicating so clearly that they believed there was a minimal role for asymptomatic spread, the WHO sent a mixed message that, while extreme social distancing works, perhaps it was not so necessary. Some policy makers clearly latched onto the idea that the disease only spreads from very sick people, and that if you aren’t sick then you should continue to head out to the local pub and contribute to the economy. The US CDC seems to have been slow to understand the error (see the CNN story cited above), and the White House just ran with the version of events that seemed like it would be politically most favorable, and least inconvenient economically.

The Joint Mission based the assertion that asymptomatic and presymptomatic infection is “rare” on a study in Guangdong Province. Here is Science again: “To get at this question, the report notes that so-called fever clinics in Guangdong province screened approximately 320,000 people for COVID-19 and only found 0.14% of them to be positive.” Caitlin Rivers, from Johns Hopkins, hit the nail on the head by observing that “Guangdong province was not a heavily affected area, so it is not clear whether [results from there hold] in Hubei province, which was the hardest hit.”

I am quite concerned (and, frankly, disappointed) that the WHO team took at face value that the large scale screening effort in Guangdong that found a very low “asymptomatic count” is somehow representative of anywhere else. Guangdong has a ~50X lower “case count” than Hubei, and a ~400X lower fatality rate, according to the Johns Hopkins Dashboard on 15 March — the disparity was probably even larger when the study was performed. The course of the disease was clearly quite different in Guangdong than in Hubei.

Travel restrictions and social distancing measures appear to have had a significant impact on spread from Hubei to Guangdong, and within Guangdong, which means that we can’t really know how many infected individuals were in Guangdong, or how many of those were really out in the community. A recent study computed the probability of spread from Wuhan to other cities given both population of the city and number of inbound trips from Wuhan; for Guangzhou, in Guangdong, the number of infections was anomalously low given its very large population. That is, compared with other transmission chains in China, Guangdong wound up with many fewer cases than you would expect, and the case count there is therefore not representative. Consequently, the detected infection rate in Guangdong is not a useful metric for understanding anything but Guangdong. The number relevant for epidemiological modeling is the rate of asymptomatic infection in the *absence* of control measures, because that tells us how the virus behaves without draconian social distancing, and any return to normalcy in the world will not have that sort of control measure in place.

Now, if I am being charitable, it may have been that the only large scale screening data set available to the Joint Mission at the time was from Guangdong. The team needed to publish a report, and saying something about asymptomatic transmission was critically important to telling a comprehensive story, so perhaps they went with the only data they had. But the conclusions smelled wrong to me as soon as they were announced. I wrote as much to several reporters and on Twitter, observing that the WHO report was problematic because it assumed the official case counts approximated the actual number of infections, but I couldn’t put my finger on exactly what bugged me until I could put together the rest of the story above. Nevertheless, the WHO has a lot of smart people working for it; why did the organization so quickly embrace and promulgate a narrative that was so obviously problematic to anyone who knows about epidemiology and statistics?

What went wrong at the WHO?

There are some very strong opinions out there regarding the relationship between China and the WHO, and how that relationship impacts the decisions made by Director-General Dr. Tedros Adhanom. I have not met Dr. Tedros and only know what I read about him. However, I do have personal experience with several individuals now higher up in the chain of command for the WHO coronavirus response, and I have no confidence in them whatsoever. Here is my backstory.

I have wandered around the edges of the WHO for quite a while, and have spent most of my time in Geneva at the UN proper and working with the Biological Weapons Convention Implementation Support Unit. Then, several years ago, I was asked to serve on a committee at WHO HQ. I wasn’t particularly enthusiastic about saying yes, but several current and former high ranking US officials convinced me it was for the common good. So I went. It doesn’t matter which committee at the moment. What does matter is that, when it came time to write the committee report, I found that the first draft embraced a political narrative that was entirely counter to my understanding of the relevant facts, science, and history. I lodged my objections to the draft in a long minority report that pointed out the specific ways in which the text diverged from reality. And then something interesting happened.

I received a letter informing me that my appointment to the committee had been a mistake, and that I was actually supposed to be just a technical advisor. Now, the invitation said “member”, and all the documents that I signed beforehand said “member”, with particular rights and responsibilities, including a say in the text of the report. I inquired with the various officials who had encouraged me to serve, as well as with a diplomat or two, and the unanimous opinion was that I had been retroactively demoted so that the report could be written without addressing my concerns. All of those very experienced people were quite surprised by this turn of events. In other words, someone in the WHO went to surprising lengths to try to ensure that the report reflected a particular political perspective rather than facts, history, and science. Why? I do not know what the political calculations were. But I do know this: the administrative leadership in charge of the WHO committee I served on is now high up in the chain of command for the coronavirus response.

Coda: as it turns out, the final report hewed closely to reality as I understood it, and embraced most of the points I wanted it to make. I infer, but do not know for certain, that one or more other members of the committee — who presumably could not be shunted aside so easily, and who presumably had far more political heft than I do — picked up and implemented my recommended changes. So all's well that ends well? But the episode definitely contributed to my education (and cynicism) about how the WHO balances politics and science, and I am ill-disposed to trust the organization. Posting my account may mean that I am not invited to hang out at the WHO again. This is just fine.

How much bearing does my experience have on what is happening now in the WHO coronavirus response? I don’t know. You have to make up your own mind about this. But having seen the sausage being made, I am all too aware that the organization can be steered by political considerations. And that definitely increases uncertainty about what is happening on the ground. I won’t be writing or saying anything more specific about that particular episode at this time.

Uncertainty in the Time of COVID-19, Part 1

Part 1: Introduction

Times being what they are, in which challenging events abound and good information is hard to come by, I am delving back into writing about infectious disease (ID). While I’ve not been posting here about the intersection of ID, preparedness, and biosecurity, I have continued to work on these problems as a consultant for corporations, the US government, and the WHO. More on that in a bit, because my experience on the ground at the WHO definitely colors my perception of what the organization has said about events in China.

These posts will primarily be a summary of what we do, and do not, know about the current outbreak of the disease named COVID-19, and its causative agent, a coronavirus known officially as SARS-CoV-2 (for “SARS coronavirus-2”). I am interested in 1) what the ground truth is as best we can get to it in the form of data (with error bars), and I am interested in 2) claims that are made that are not supported by that data. You will have read definitive claims that COVID-19 will be no worse than a bad flu, and you will have read definitive claims that the sheer number of severe cases will overwhelm healthcare systems around the world, potentially leading to shocking numbers of fatalities. The problem with any definitive claim at this point is that we still have insufficient concrete information about the basic molecular biology of the virus and the etiology of this disease to have a good idea of what is going to happen. Our primary disadvantage right now is that uncertainty, because uncertainty necessarily complicates both our understanding of the present and our planning for the future.

Good sources of information: If you want to track raw numbers and geographical distribution, the Johns Hopkins Coronavirus COVID-19 Global Cases dashboard is a good place to start, with the caveat that “cases” here means those officially reported by national governments, which data are not necessarily representative of what is happening out in the real world. The ongoing coverage at The Atlantic about testing (here, and here, for starters) is an excellent place to read up on the shortcomings of the current US approach, as well as to develop perspective on what has happened as a result of comprehensive testing in South Korea. Our World In Data has a nice page, updated often, that provides a list of basic facts about the virus and its spread (again with a caveat about “case count”). Nextstrain is a great tool to visualize how the various mutations of SARS-CoV-2 are moving around the world, and changing as they go. That we can sequence the virus so quickly is a welcome improvement in our response, as it allows sorting out how infection is spreading from one person to another, and one country to another. This is a huge advance in human capability to deal with pathogen outbreaks. However, and unfortunately, this is still retrospective information, and means we are chasing the virus, not getting ahead of it.

How did we get here?

My 2006 post, “Nature is Full of Surprises, and We Are Totally Unprepared”, summarizes some of my early work with Bio-era on pandemic preparedness and response planning, which involved looking back at SARS and various influenza epidemics in order to understand future events. One of the immediate observations you make from even a cursory analysis of outbreaks is that pathogen surveillance in both animals and humans needs to be an ongoing priority. Bio-era concluded that humanity would continue to be surprised by zoonotic events in the absence of a concerted effort to build up global surveillance capacity. We recommended to several governments that they address this gap by aggressively rolling out sampling and sequencing of wildlife pathogens. And then not much happened to develop any real surveillance capacity until — guess what — we were surprised again by the 2009 H1N1 (aka Mexican, aka Swine) flu outbreak, which nobody saw coming because nobody was looking in the right place.

In the interval since, particularly in the wake of the “West Africa” Ebola outbreak that started in 2013, global ID surveillance has improved. The following years also saw lots of news about the rise of the Zika virus and the resurgence of Dengue, about which I am certain we have not heard the last. In the US, epidemic planning and response was finally taken seriously at the highest levels of power, and a Global Health and Security team was established within the National Security Council. That office operated until 2018, when the current White House defunded the NSC capability as well as a parallel effort at DHS (read this Foreign Policy article by Laurie Garrett for perspective: “Trump Has Sabotaged America’s Coronavirus Response”). I am unable to be adequately politic about these events just yet, even when swearing like a sailor, so I will mostly leave them aside for now. I will try to write something about US government attitudes about preparing to deal with lethal infectious diseases under separate cover; in the meantime you might get some sense of my thinking from my memorial to virologist Mark Buller.

Surprise? Again?

Outside the US government, surveillance work has continued. The EcoHealth Alliance has been on the ground in China for many years now, sequencing animal viruses, particularly from bats, in the hopes of getting a jump on the next zoonosis. I was fortunate to work with several of the founders of the EcoHealth Alliance, Drs. Peter Daszak and Billy Karesh, during my time with Bio-era. They are good blokes. Colorful, to be sure — which you sort of have to be to get out of bed with the intention of chasing viruses into bat caves and jumping out of helicopters to take blood samples from large predators. The EcoHealth programs have catalogued a great many potential zoonotic viruses over the years, including several that are close relatives of both SARS-CoV (the causative agent of SARS) and SARS-CoV-2. And then there is Ralph Baric, at UNC, who with colleagues in China has published multiple papers over the years pointing to the existence of a cluster of SARS-like viruses circulating in animals in Hubei. See, in particular, “A SARS-like cluster of circulating bat coronaviruses shows potential for human emergence”, which called out in 2015 a worrisome group of viruses to which SARS-CoV-2 belongs. This work almost certainly could not have picked out that specific virus before it jumped to humans, because that would require substantially more field surveillance and more dedicated laboratory testing than has been possible with existing funding. But Baric and colleagues gave a clear heads up that something was brewing. And yet we were “surprised”, again. (Post publication note: For more on what has so far been learned about the origin of the virus, see this absolutely fantastic article in Scientific American that came out today: How China’s “Bat Woman” Hunted Down Viruses from SARS to the New Coronavirus, by Jane Qiu. I will come back to it in later installments of this series. It is really, really good.)

Not only were we warned, we have considerable historical experience that (wildlife consumption + coronavirus + humans) leads to zoonosis, or a disease that jumps from animals to humans. This particular virus still caught us unawares; it snuck up on us because we need to do a much better job of understanding how viruses jump from animal hosts to humans. Unless we start paying closer attention, it won’t be the last time. The pace of zoonotic events among viruses related to SARS-CoV has accelerated over the last 25 years, as I will explore in a forthcoming post. The primary reason for this acceleration, according to the wildlife veterinarians and virus hunters I talk to, is that humans continue to both encroach on natural habitats and to bring animals from those habitats home to serve for dinner. So in addition to better surveillance, humans could reduce the chance of zoonosis by eating fewer wild animals. Either way, the lesson of being surprised by SARS-CoV-2 is that we must work much harder to stay ahead of nature.

Why is the US, in particular, so unprepared to deal with this virus?

The US government has a long history of giving biological threats and health security inadequate respect. Yes, there have always been individuals and small groups inside various agencies and departments who worked hard to increase our preparedness and response efforts. But people at the top have never fully grasped what is at stake and what needs to be done.

Particularly alarming is that we have recently experienced a unilateral disarmament in the face of known and obvious threats. See the Laurie Garrett article cited above for details. As reported by The New York Times:

“Mr. Trump had no explanation for why his White House shut down the Directorate for Global Health Security and Biodefense established at the National Security Council in 2016 by President Barack Obama after the 2014 Ebola outbreak.”

Yet this is more complicated than is apparent or is described in the reporting, as I commented on Twitter earlier this week. National security policy in the US has been dominated for many decades by people who grew up intellectually in the Cold War, or were taught by people who fought the Cold War. Cold War security was about nation states and, most importantly, nuclear weapons. When the Iron Curtain fell, the concern about large nations (i.e., the USSR) slipped away for a while, eventually to be replaced by small states, terrorism, and WMDs. But WMD policy, which in principle includes chemical and biological threats, has continued to be dominated by the nuclear security crowd. The argument is always that nuclear (and radiological) weapons are more of a threat and can cause more damage than a mere microbe, whether natural or artificial. And then there is the spending associated with countering the more kinetic threats: the big, shiny, splody objects get all the attention. So biosecurity and pandemic preparedness and response, which often are lumped together as "health security", get short shrift because the people setting priorities have other priorities. This has been a problem for both Democrat and Republican administrations, and demonstrates a history of bipartisan blindness.

Then, after decades of effort, and an increasing number of emergent microbial/health threats, finally a position and office were created within the National Security Council. While far from a panacea, because the USG needs to do much more than have policy in place, this was progress.

And then a new Administration came in, which not only has different overall security priorities but also is dominated by old school security people who are focussed on the intersection of a small number of nation states and nuclear weapons. John Bolton, in particular, is a hardline neocon whose intellectual roots are in Cold War security policy, so he is focussed on nukes. His ascendance at the NSC coincided not just with the shutdown of the NSC preparedness office, but also with the shutdown of a parallel DHS office responsible for implementing policy. And then, beyond the specific mania driving a focus on nation states and nukes as the primary threats to US national security, there is the oft-reported war on expertise in the current executive branch and Executive Office of the President. Add it all up: the USG is now severely understaffed for the current crisis.

Even the knowledgeable professionals still serving in the government have been hamstrung by bad policy in their ability to organize a response. To be blunt: patients are dying because the FDA & CDC could not get out of the way or — imagine it — help in accelerating the availability of testing at a critical time in a crisis. There will be a reckoning. And then public health in the US will need to be rebuilt, and earn trust again. There is a long road ahead. But first we have to deal with SARS-CoV-2.

Who is this beastie, SARS-CoV-2?

Just to get the introductions out of the way, the new virus is classified within order Nidovirales, family Coronaviridae, subfamily Orthocoronavirinae. You may also see it referred to as a betacoronavirus. To give you some sense of the diversity of coronaviruses, here is a nice, clean visual representation of their phylogenetic relationships. It contains names of many familiar human pathogens. If you are wondering why we don’t have a better understanding of this family of viruses given their obvious importance to human health and to economic and physical security, good for you — you should wonder about this. For the cost of a single marginally functional F-35, let alone a white elephant new aircraft carrier, we could fund viral surveillance and basic molecular biology for all of these pathogens for years.

The diversity of pathogenic coronaviruses. Source: Xyzology.

Betacoronaviruses (BCVs) are RNA viruses that are surrounded by a lipid membrane. The membrane is damaged by soap and by ethyl or isopropyl alcohol; without the membrane the virus falls apart. BCVs differ from influenza viruses in both their genome structure and in the way they evolve. Influenza viruses have segmented genomes — the genes are, in effect, organized into chromosomes — and the virus can evolve either through swapping chromosomes with other flu strains or through mutations that happen when the viral polymerase, which copies RNA, makes a mistake. The influenza polymerase makes lots of mistakes, which means that many different sequences are produced during replication. This is a primary driver of the evolution of influenza viruses, and largely explains why new flu strains show up every year. While the core of the copying machinery in Betacoronaviruses is similar to that of influenza viruses, it also contains an additional component called Nsp-14 that corrects copying mistakes. Disable or remove Nsp-14 and you get influenza-like mutation rates in Betacoronaviruses. (For some reason I find that observation particularly fascinating, though I can’t really explain why.)

Another important feature of the BCV polymerase is that it facilitates recombination between RNA strands that happen to be floating around nearby. This means that if a host cell happens to be infected with more than one BCV strain at the same time, you can get a relatively high rate of new genomes being assembled out of all the parts floating around. This is one reason why BCV genome sequences can look like they are pasted together from strains that infect different species — they are often assembled exactly that way at the molecular level.

Before digging into the uncertainties around this virus and what is happening in the world, we need to understand how it is detected and diagnosed. There are three primary means of diagnosis. The first is by display of symptoms, which span a long list, from cold-like upper respiratory features such as runny nose, fever, and sore throat, to much less pleasant, and in some cases deadly, lower respiratory impairment. (I recently heard an expert on the virus say that there are two primary ways that SARS-like viruses can kill you: “Either your lungs fill up with fluid, limiting your access to oxygen, and you drown, or all the epithelial cells in your lungs slough off, limiting your access to oxygen, and you suffocate.” Secondary infections are also more lethal for people experiencing COVID-19 symptoms.) The second method of diagnosis is imaging of the lungs, which includes x-ray and CT scans; SARS-CoV-2 causes particular pathologies in the lungs that can be identified on images and that distinguish it from other respiratory viruses. Finally, the virus can be diagnosed via two kinds of molecular assay: the first uses antibodies to look directly for viral proteins in tissue or fluid samples, while the second looks for whether viral genetic material is present; sophisticated versions of the latter can quantify how many copies of viral RNA are present in a sample.

Each of these diagnostic methods is usually described as being “accurate” or “sensitive” to some degree, when instead they should be described as having some error rate, a rate that might depend on when or where the method was applied, or vary with who was applying it. And every time you read how “accurate” or “sensitive” a method is, you should ask: compared to what? And this is where we get into uncertainty.

Part 2 of this series will dig into specific sources of uncertainty spanning measurement and diagnosis to recommendations.

A memorial to Mark Buller, PhD, and our response to the propaganda film "Demon in the Freezer".

Earlier this year my friend and colleague Mark Buller passed away. Mark was a noted virologist and a professor at Saint Louis University. He was struck by a car while riding his bicycle home from the lab, and died from his injuries. Here is Mark's obituary as published by the university.

In 2014 and 2015, Mark and I served as advisors to a WHO scientific working group on synthetic biology and the variola virus (the causative agent of smallpox). In 2016, we wrote the following, previously unpublished, response to an "Op-Doc" that appeared in the New York Times. In a forthcoming post I will have more to say about both my experience with the WHO and my thoughts on the recent publication of a synthetic horsepox genome. For now, here is the last version (circa May, 2016) of the response Mark and I wrote to the Op-Doc, published here as my own memorial to Professor Buller.


Variola virus is still needed for the development of smallpox medical countermeasures

On May 17, 2016 Errol Morris presented a short movie entitled “Demon in the Freezer” [note: quite different from the book of the same name by Richard Preston] in the Op-Docs section of the on-line New York Times. The piece purported to present both sides of the long-standing argument over what to do with the remaining laboratory stocks of variola virus, the causative agent of smallpox, which no longer circulates in the human population.

Since 1999, the World Health Organization has on numerous occasions postponed the final destruction of the two variola virus research stocks in Russia and the US in order to support public health related research, including the development of smallpox molecular diagnostics, antivirals, and vaccines.  

“Demon in the Freezer” clearly advocates for destroying the virus. The Op-Doc impugns the motivation of scientists carrying out smallpox research by asking: “If given a free hand, what might they unleash?” The narrative even suggests that some in the US government would like to pursue a nefarious policy goal of “mutually assured destruction with germs”. This portion of the movie is interlaced with irrelevant, hyperbolic images of mushroom clouds. The reality is that in 1969 the US unilaterally renounced the production, storage, or use of biological weapons for any reason whatsoever, including in response to a biological attack from another country. The same cannot be said for ISIS and Al-Qaeda. In 1975 the US ratified the 1925 Geneva Protocol banning chemical and biological agents in warfare and became party to the Biological Weapons Convention that emphatically prohibits the use of biological weapons in warfare.

“Demon in the Freezer” is constructed with undeniable flair, but in the end it is a benighted 21st century video incarnation of a middling 1930s political propaganda mural. It was painted with only black and white pigments, rather than a meaningful palette of colors, and using a brush so broad that it blurred any useful detail. Ultimately, and to its discredit, the piece sought to create fear and outrage based on unsubstantiated accusations.

Maintaining live smallpox virus is necessary for ongoing development and improvement of medical countermeasures. The first-generation US smallpox vaccine was produced in domesticated animals, while the second-generation smallpox vaccine was manufactured in sterile bioreactors; both have the potential to cause serious side effects in 10-20% of the population. The third generation smallpox vaccine has an improved safety profile, and causes minimal side effects. Fourth generation vaccine candidates, based on newer, lower cost, technology, will be even safer and some are in preclinical testing. There remains a need to develop rapid field diagnostics and an additional antiviral therapy for smallpox.

Continued vigilance is necessary because it is widely assumed that numerous undeclared stocks of variola virus exist around the world in clandestine laboratories. Moreover, unsecured variola virus stocks are encountered occasionally in strain collections left behind by long-retired researchers, as demonstrated in 2014 with the discovery of 1950s vintage variola virus in a cold room at the NIH. The certain existence of unofficial stocks makes destroying the official stocks an exercise in declaring “victory” merely for political purposes rather than a substantive step towards increasing security. Unfortunately, the threat does not end with undeclared or forgotten samples.

In 2015 a WHO Scientific Working Group on Synthetic Biology and Variola Virus and Smallpox determined that a “skilled laboratory technician or undergraduate student with experience of working with viruses” would be able to generate variola virus from the widely available genomic sequence in “as little as three months”. Importantly, this Working Group concluded that “there will always be the potential to recreate variola virus and therefore the risk of smallpox happening again can never be eradicated.” Thus, the goal of a variola virus-free future, however laudable, is unattainable. This is sobering guidance on a topic that requires sober consideration.

We welcome increased discussions of the risk of infectious disease and of public health preparedness. In the US these topics have too long languished among second (or third) tier national security conversations. The 2014 West Africa Ebola outbreak and the current Congressional debate over funding to counter the Zika virus exemplify the business-as-usual political approach of throwing half a bucket of water on the nearest burning bush while the surrounding countryside goes up in flames. Lethal infectious diseases are serious public health and global security issues and they deserve serious attention.

The variola virus has killed more humans numerically than any other single cause in history. This pathogen was produced by nature, and it would be the height of arrogance, and very foolish indeed, to assume nothing like it will ever again emerge from the bush to threaten human life and human civilization. Maintenance of variola virus stocks is needed for continued improvement of molecular diagnostics, antivirals, and vaccines. Under no circumstances should we unilaterally cripple those efforts in the face of the most deadly infectious disease ever to plague humans. This is an easy mistake to avoid.

Mark Buller, PhD, was a Professor of Molecular Microbiology & Immunology at Saint Louis University School of Medicine, who passed away on February 24, 2017. Rob Carlson, PhD, is a Principal at the engineering and strategy firm Biodesic and a Managing Director of Bioeconomy Capital.

The authors served as scientific and technical advisors to the 2015 WHO Scientific Working Group on Synthetic Biology and Variola Virus.

Guesstimating the Size of the Global Array Synthesis Market

(Updated, Aug 31, for clarity.)

After chats with a variety of interested parties over the last couple of months, I decided it would be useful to try to sort out how much DNA is synthesized annually on arrays, in part to get a better handle on what sort of capacity it represents for DNA data storage. The publicly available numbers, as usual, are terrible, which is why the title of the post contains the word "guesstimating". Here goes.

First, why is this important? As the DNA synthesis industry grows, and the number of applications expands, new markets are emerging that use that DNA in different ways. Not all that DNA is produced using the same method, and the different methods are characterized by different costs, error rates, lengths, throughput, etc. (The Wikipedia entry on Oligonucleotide Synthesis is actually fairly reasonable, if you want to read more. See also Kosuri and Church, "Large-scale de novo DNA synthesis: technologies and applications".) If we are going to understand the state of the technology, and the economy built on that technology, then we need to be careful about measuring what the technology can do and how much it costs. Once we pin down what the world looks like today, we can start trying to make sensible projections, or even predictions, about the future.

While there is just one basic chemistry used to synthesize oligonucleotides, there are two physical formats that give you two very different products. Oligos synthesized on individual columns, which might be packed into 384-well (or higher density) plates, can be manipulated as individual sequences. You can use those individual sequences for any number of purposes, and if you want just one sequence at a time (for PCR or hybridization probes, gene therapy, etc), this is probably how you make it. You can build genes from column oligos by combining them pairwise, or in larger numbers, until you get the size construct you want (typically of order a thousand bases, or a kilobase [kb], at which point you start manipulating the kb fragments). I am not going to dwell on gene assembly and error correction strategies here; you can Google that.

The other physical format is array synthesis, in which synthesis takes place on a solid surface consisting of up to a million different addressable features, where light or charge is used to control which sequence is grown on which feature. Typically, all the oligos are removed from the array at once, which results in a mixed pool. You might insert this pool into a longer backbone sequence to construct a library of different genes that code for slightly different protein sequences, in order to screen those proteins for the characteristics you want. Or, if you are ambitious, you might use the entire pool of array oligos to directly assemble larger constructs such as genes. Again, see Google, Codon Devices, Gen9, Twist, etc. More relevant to my purpose here, a pool of array-synthesized oligos can be used as an extremely dense information storage medium. To get a sense of when that might be a viable commercial product, we need to have an idea of the throughput of the industry, and how far away from practical implementation we might be. 

Next, to recap, last year I made a stab at estimating the size of the gene synthesis market. Much of the industry revenue data came from a Frost & Sullivan report, commissioned by Genscript for its IPO prospectus. The report put the 2014 market for synthetic genes at only $137 million, from which I concluded that the total number of bases shipped as genes that year was 4.8 billion, or a bit less than a duplex human genome. Based on my conversations with people in the industry, I conclude that most of those genes were assembled from oligos synthesized on columns, with a modest, but growing, fraction from array oligos. (See "On DNA and Transistors", and preceding posts, for commentary on the gene synthesis industry and its future.)

The Frost & Sullivan report also claims that the 2014 market for single-stranded oligonucleotides was $241 million. The Genscript IPO prospectus does not specify whether this $241 million was from both array- and column-synthesized oligos, or not. But because Genscript only makes and uses column synthesis, I suspect it referred only to that synthesis format.  At ~$0.01 per base (give or take), this gives you about 24 billion bases synthesized on columns sold in 2014. You might wind up paying as much as $0.05 to $0.10 per base, depending on your specifications, which if prevalent would pull down the total global production volume. But I will stick with $0.01 per base for now. If you add the total number of bases sold as genes and the bases sold as oligos, you get to just shy of 30 billion bases (leaving aside for the moment the fact that an unknown fraction of the genes came from oligos synthesized on arrays).
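
For concreteness, here is the arithmetic of the last two paragraphs as a short Python sketch. The market sizes are the Frost & Sullivan figures, and the ~$0.01 per base price is the assumption discussed above, so the outputs are rough estimates rather than measurements.

```python
# Back-of-envelope: bases sold as genes and as column-synthesized oligos in 2014.
# All inputs are assumptions or figures quoted in the text above.

gene_market_usd = 137e6        # 2014 synthetic gene market (Frost & Sullivan)
bases_as_genes = 4.8e9         # bases shipped as genes (estimate from the earlier post)

oligo_market_usd = 241e6       # 2014 single-stranded oligo market (Frost & Sullivan)
price_per_base_column = 0.01   # assumed average price, $/base, column synthesis

bases_as_column_oligos = oligo_market_usd / price_per_base_column  # ~2.4e10
total_bases = bases_as_genes + bases_as_column_oligos              # ~2.9e10

print(f"Column oligo bases: {bases_as_column_oligos:.1e}")
print(f"Genes + oligos:     {total_bases:.1e}")  # just shy of 30 billion bases
```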

So, now, what about array synthesis? If you search the interwebs for information on the market for array synthesis, you get a mess of consulting and marketing research reports that cost between a few hundred and many thousands of dollars. I find this to be an unhelpful corpus of data and analysis, even when I have the report in hand, because most of the reports are terrible at describing sources and methods. However, as there is no other source of data, I will use a rough average of the market sizes from the abstracts of those reports to get started. Many of the reports claim that in 2016 the global market for oligo synthesis was ~$1.3 billion, and that this market will grow to $2.X billion by 2020 or so. Of the $1.3B 2016 revenues, the abstracts assert that approximately half was split evenly between "equipment and reagents". I will note here that this should already make the reader skeptical of the analyses, because who is selling ~$325M worth of synthesis "equipment"? And who is buying it? Seems fishy. But I can see ~$325M in reagents, in the form of various columns, reagents, and purification kit. This trade, after all, is what keeps outfits like Glen Research and TriLink in business.

Forging ahead through swampy, uncertain data, that leaves us with ~$650M in raw oligos. Should we say this is inclusive or exclusive of the $241M figure from Frost & Sullivan? I am going to split the difference and call it $500M, since we are already well into hand waving territory by now, anyway. How many bases does this $500M buy?

Array oligos are a lot cheaper than column oligos. Kosuri and Church write that "oligos produced from microarrays are 2–4 orders of magnitude cheaper than column-based oligos, with costs ranging from $0.00001–0.001 per nucleotide, depending on length, scale and platform." Here we stumble a bit, because cost is not the same thing as price. As a consumer, or as someone interested in understanding how actually acquiring a product affects project development, I care about price. Without knowing a lot more about how this cost range is related to price, and the distribution of prices paid to acquire array oligos, it is hard to know what to do with the "cost" range. The arithmetic mean of that range would be about $0.0005 per base, but I also happen to know that you can get oligos en masse for less than that. But I do not know what the true average price is. For the sake of expediency, I will call it $0.0001 per base for this exercise.

Combining the revenue estimate and the price gives us about 5E12 bases per year. From there, assuming roughly 100-mer oligos, you get to 5E10 different sequences. And adding in the number of features per array (between 100,000 and 1M), you get as many as 500,000 arrays run per year, or about 1370 per day. (It is not obvious that you should think of this as 1370 instruments running globally, and after seeing the Agilent oligo synthesis operation a few years ago, I suggest that you not do that.) If the true average price is closer to $0.00001 per base, then you can bump up the preceding numbers by an order of magnitude. But, to be conservative, I won't do that here. Also note that the ~30 billion bases synthesized on columns annually are not even a rounding error on the 5E12 synthesized on arrays.
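
Here is that chain of guesses as a short Python sketch, so you can swap in your own assumptions; every input below (revenue, price per base, oligo length, features per array) is a guess discussed above, not a measured value.

```python
# Array-synthesis guesstimate, using the assumptions stated in the text.

array_oligo_revenue_usd = 500e6   # split-the-difference revenue estimate
price_per_base_array = 1e-4       # assumed average price, $/base
oligo_length = 100                # assume ~100-mer oligos
features_per_array = 1e5          # low end; arrays run up to ~1e6 features

bases_per_year = array_oligo_revenue_usd / price_per_base_array  # ~5e12
sequences_per_year = bases_per_year / oligo_length               # ~5e10
arrays_per_year = sequences_per_year / features_per_array        # ~5e5
arrays_per_day = arrays_per_year / 365                           # ~1370

print(f"{bases_per_year:.1e} bases/yr, {sequences_per_year:.1e} sequences/yr")
print(f"{arrays_per_year:.0f} arrays/yr, about {arrays_per_day:.0f} per day")
```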

Aside: None of these calculations delve into the mass (or the number of copies) per synthesized sequence. In principle, of course, you only need one perfect copy of each sequence, whether synthesized on columns or arrays, to use DNA in just about any application (except where you need to drive the equilibrium or reaction kinetics). Column synthesis gives you many more copies (i.e., more mass per sequence) than array synthesis. In principle — ignoring the efficiency of the chemical reactions — you could dial down the feature size on arrays until you were synthesizing just one copy per sequence. But then it would become exceedingly important to keep track of that one copy through successive fluidic operations, which sounds like a quite difficult prospect. So whatever the final form factor, an instrument needs to produce sufficient copies per sequence to be useful, but not so many that resources are wasted on unnecessary redundancy/degeneracy.

Just for shits and giggles, and because array synthesis could be important for assembling the hypothetical synthetic human genome, this all works out to be enough DNA to assemble 833 human duplex genomes per year, or 3 per day, in the absence of any other competing uses, of which there are obviously many. Also if you don't screw up and waste some of the DNA, which is inevitable. Finally, at a density of ~1 bit/base, this is enough to annually store about 5 terabits of data, or roughly 0.6 terabytes — a good chunk of a laptop hard drive, but no more.
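
And the genome-equivalent and storage conversions, again as a sketch; the 6 gigabase duplex genome and ~1 bit per base figures are the assumptions stated above.

```python
# Converting ~5e12 bases/yr into genome equivalents and raw storage capacity.

bases_per_year = 5e12
duplex_genome_bases = 6e9    # one duplex human genome
bits_per_base = 1.0          # assumed encoding density

genomes_per_year = bases_per_year / duplex_genome_bases  # ~833
genomes_per_day = genomes_per_year / 365                 # ~2.3, call it 3
storage_bits = bases_per_year * bits_per_base            # 5e12 bits = 5 Tbit
storage_terabytes = storage_bits / 8 / 1e12              # ~0.6 TB

print(f"{genomes_per_year:.0f} duplex genomes/yr, ~{genomes_per_day:.0f}/day")
print(f"{storage_bits:.0e} bits/yr, about {storage_terabytes:.2f} TB")
```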

And so, if you have access to the entire global supply of single stranded oligonucleotides, and you have an encoding/decoding and sequencing strategy that can handle significant variations in length and high error rates at scale, you can store enough HD movies and TV to capture most of the new, good stuff that HollyBollyWood churns out every year. Unless, of course, you also need to accommodate the tastes and habits of a tween daughter, in which case your storage budget is blown for now and evermore no matter how much capacity you have at hand. Not to mention your wallet. Hey, put down the screen and practice the clarinet already. Or clean up your room! Or go to the dojo! Yeesh! Kids these days! So many exclamations!

Where was I?

Now that we have some rough numbers in hand, we can try to say something about the future. Based on my experience working on the Microsoft/UW DNA data storage project, I have become convinced that this technology is coming, and it will be based on massive increases in the supply of synthetic DNA. To compete with an existing tape drive (see the last few 'graphs of this post), able to read and write ~2 Gbits a second, a putative DNA drive would need to be able to read and write ~2 Gbases per second, or ~173 Tbits/day, or the equivalent of roughly 29,000 duplex human genomes a day — per instrument/device. Based on the guesstimate above, which gave a global throughput of just 3 human genomes per day, we are waaaay below that goal.
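
A quick sanity check of that comparison, assuming ~1 bit per base and a 6 gigabase duplex genome as above; the ~2 Gbit/s tape figure is the one quoted in the earlier post.

```python
# Required DNA-drive throughput versus the guesstimated global supply.

tape_write_bits_per_sec = 2e9                  # ~2 Gbit/s archival tape drive
bases_per_sec = tape_write_bits_per_sec / 1.0  # ~2 Gbases/s at 1 bit per base
seconds_per_day = 86400

bases_per_day = bases_per_sec * seconds_per_day       # ~1.7e14
genomes_per_day_required = bases_per_day / 6e9        # ~29,000 per device
global_genomes_per_day = 3                            # the guesstimate above

gap = genomes_per_day_required / global_genomes_per_day  # ~10,000x
print(f"Required: ~{genomes_per_day_required:,.0f} genomes/day per device")
print(f"Global array supply today: ~{global_genomes_per_day}/day; gap ~{gap:,.0f}x")
```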

To be sure, there is probably some demand for a DNA storage technology that can work at lower throughputs: long term cold storage, government archives, film archives, etc. I suspect, however, that the many advantages of DNA data storage will attract an increasing share of the broader archival market once the basic technology is demonstrated on the market. I also suspect that developing the necessary instrumentation will require moving away from the existing chemistry to something new and different, perhaps enzymatically controlled synthesis, perhaps even with the aid of the still hypothetical DNA "synthase", which I first wrote about 17 years ago.

In any event, based on the limited numbers available today, it seems likely that the current oligo array industry has a long way to go before it can supply meaningful amounts of DNA for storage. It will be interesting to see how this all evolves.

A Few Thoughts and References Re Conservation and Synthetic Biology

Yesterday at Synthetic Biology 7.0 in Singapore, we had a good discussion about the intersection of conservation, biodiversity, and synthetic biology. I said I would post a few papers relevant to the discussion, which are below.

These papers are variously: the framing document for the original meeting at the University of Cambridge in 2013 (see also "Harry Potter and the Future of Nature"), sponsored by the Wildlife Conservation Society; follow-on discussions from meetings in San Francisco and Bellagio; and my own efforts to try to figure out how to quantify the economic impact of biotechnology (which is not small, especially when compared to much older industries) and the economic damage from invasive species and biodiversity loss (which is also not small, measured as either dollars or jobs lost). The final paper in this list is my first effort to link conservation and biodiversity with economic and physical security, which requires shifting our thinking from the national security of nation states and their political boundaries to the natural security of the systems and resources that those nation states rely on for continued existence.

"Is It Time for Synthetic Biodiversity Conservation?", Antoinette J. Piaggio1, Gernot Segelbacher, Philip J. Seddon, Luke Alphey, Elizabeth L. Bennett, Robert H. Carlson, Robert M. Friedman, Dona Kanavy, Ryan Phelan, Kent H. Redford, Marina Rosales, Lydia Slobodian, Keith WheelerTrends in Ecology & Evolution, Volume 32, Issue 2, February 2017, Pages 97–107

Robert Carlson, "Estimating the biotech sector's contribution to the US economy", Nature Biotechnology, 34, 247–255 (2016), 10 March 2016

Kent H. Redford, William Adams, Rob Carlson, Bertina Ceccarelli, “Synthetic biology and the conservation of biodiversity”, Oryx, 48(3), 330–336, 2014.

"How will synthetic biology and conservation shape the future of nature?", Kent H. Redford, William Adams, Georgina Mace, Rob Carlson, Steve Sanderson, Framing Paper for International Meeting, Wildlife Conservation Society, April 2013.

"From national security to natural security", Robert Carlson, Bulletin of the Atomic Scientists, 11 Dec 2013.

Warning: Construction Ahead

I am migrating from Movable Type to Squarespace. There was no easy way to do this. Undoubtedly, there are presently all sorts of formatting hiccups, lost media and images, and broken links. If you are looking for something in particular, use the Archive or Search tabs.

If you have a specific link you are trying to follow, and it has dashes between words, try replacing them with underscores. E.g., instead of "www.synthesis.cc/x-y-z", try "www.synthesis.cc/x_y_z". If the URL ends in "/x.html", try replacing that with "/x/".

I will be repairing links, etc., as I find them.

Late Night, Unedited Musings on Synthesizing Secret Genomes

By now you have probably heard that a meeting took place this past week at Harvard to discuss large scale genome synthesis. The headline large genome to synthesize is, of course, that of humans. All 6 billion (duplex) bases, wrapped up in 23 pairs of chromosomes that display incredible architectural and functional complexity that we really don't understand very well just yet. So no one is going to be running off to the lab to crank out synthetic humans. That 6 billion bases, by the way, just for one genome, exceeds the total present global demand for synthetic DNA. This isn't happening tomorrow. In fact, synthesizing a human genome isn't going to happen for a long time.

But, if you believe the press coverage, nefarious scientists are planning to pull a Frankenstein and "fabricate" a human genome in secret. Oh, shit! Burn some late night oil! Burn some books! Wait, better — burn some scientists! Not so much, actually. There are several important points here. I'll take them in no particular order.

First, it's true, the meeting was held behind closed doors. It wasn't intended to be so, originally. The rationale given by the organizers for the change is that a manuscript on the topic is presently under review, and the editor of the journal considering the manuscript made it clear that it considers the entire topic under embargo until the paper is published. This put the organizers in a bit of a pickle. They decided the easiest way to comply with the editor's wishes (which were communicated to the authors well after the attendees had made travel plans) was to hold the meeting under rules even more strict than Chatham House until the paper is published. At that point, they plan to make a full record of the meeting available. It just isn't a big deal. If it sounds boring and stupid so far, it is. The word "secret" was only introduced into the conversation by a notable critic who, as best I can tell, perhaps misconstrued the language around the editor's requirement to respect the embargo. A requirement that is also boring and stupid. But, still, we are now stuck with "secret", and all the press and bloggers who weren't there are seeing Watergate headlines and fame. Still boring and stupid.

Next, it has been reported that there were no press at the meeting. However, I understand that there were several reporters present. It has also been suggested that the press present were muzzled. This is a ridiculous claim if you know anything about reporters. They've simply been asked to respect the embargo, which so far they are doing, just like they do with every other embargo. (Note to self, and to readers: do not piss off reporters. Do not accuse them of being simpletons or shills. Avoid this at all costs. All reporters are brilliant and write like Hemingway and/or Shakespeare and/or Oliver Morton / Helen Branswell / Philip Ball / Carl Zimmer / Erica Check-Hayden. Especially that one over there. You know who I mean. Just sayin'.)

How do I know all this? You can take a guess, but my response is also covered by the embargo.

Moving on: I was invited to the meeting in question, but could not attend. I've checked the various associated correspondence, and there's nothing about keeping it "secret". In fact, the whole frickin' point of coupling the meeting to a serious, peer-reviewed paper on the topic was to open up the conversation with the public as broadly as possible. (How do you miss that unsubtle point, except by trying?) The paper was supposed to come out before, or, at the latest, at the same time as the meeting. Or, um, maybe just a little bit after? But, whoops. Surprise! Academic publishing can be slow and/or manipulated/politicized. Not that this happened here. Anyway, get over it. (Also: Editors! And, reviewers! And, how many times will I say "this is the last time!")

(Psst: an aside. Science should be open. Biology, in particular, should be done in the public view and should be discussed in the open. I've said and written this in public on many occasions. I won't bore you with the references. [Hint: right here.] But that doesn't mean that every conversation you have should be subject to review by the peanut gallery right now. Think of it like a marriage/domestic partnership. You are part of society; you have a role and a responsibility, especially if you have children. But that doesn't mean you publicize your pillow talk. That would be deeply foolish and would inevitably prevent you from having honest conversations with your spouse. You need privacy to work on your thinking and relationships. Science: same thing. Critics: fuck off back to that sewery rag in — wait, what was I saying about not pissing off reporters?)

Is this really a controversy? Or is it merely a controversy because somebody said it is? Plenty of people are weighing in who weren't there or, undoubtedly worse from their perspective, weren't invited and didn't know it was happening. So I wonder if this is more about drawing attention to those doing the shouting. That is probably unfair, this being an academic discussion, full of academics.

Secondly (am I just on secondly?), the supposed ethical issues. Despite what you may read, there is no rush. No human genome, nor any human chromosome, will be synthesized for some time to come. Make no mistake about how hard a technical challenge this is. While we have some success in hand at synthesizing yeast chromosomes, and while that project certainly serves as some sort of model for other genomes, the chromatin in multicellular organisms has proven more challenging to understand or build. Consequently, any near-term progress made in synthesizing human chromosomes is going to teach us a great deal about biology, about disease, and about what makes humans different from other animals. It is still going to take a long time. There isn't any real pressing ethical issue to be had here, yet. Building the übermensch comes later. You can be sure, however, that any federally funded project to build the übermensch will come with a ~2% set-aside to pay for plenty of bioethics studies. And that's a good thing. It will happen.

There is, however, an ethical concern here that needs discussing. I care very deeply about getting this right, and about not screwing up the future of biology. As someone who has done multiple tours on bioethics projects in the U.S. and Europe, served as a scientific advisor to various other bioethics projects, and testified before the Presidential Commission on Bioethical Concerns (whew!), I find that many of these conversations are more about the ethicists than the bio. Sure, we need to have public conversations about how we use biology as a technology. It is a very powerful technology. I wrote a book about that. If only we had such involved and thorough ethical conversations about other powerful technologies. Then we would have more conversations about stuff. We would converse and say things, all democratic-like, and it would feel good. And there would be stuff, always more stuff to discuss. We would say the same things about that new stuff. That would be awesome, that stuff, those words. <dreamy sigh> You can quote me on that. <another dreamy sigh>

But on to the technical issues. As I wrote last month, I estimate the global demand for synthetic DNA (sDNA) to be 4.8 billion bases worth of short oligos and ~1 billion bases worth of longer double-stranded DNA (dsDNA), for not quite 6 Gigabases total. That, obviously, is the equivalent of a single human duplex genome. Most of that demand is from commercial projects that must return value within a few quarters, which biotech is now doing at eye-popping rates. Any synthetic human genome project is going to take many years, if not decades, and any commercial return is way, way off in the future. Even if the annual growth in commercial use of sDNA were 20% — which it isn't — this tells you, dear reader, that the commercial biotech use of synthetic DNA is never, ever, going to provide sufficient demand to scale up production to build many synthetic human genomes. Or possibly even a single human genome. The government might step in to provide a market to drive technology, just as it did for the human genome sequencing project, but my judgement is that the scale mismatch is so large as to be insurmountable. Even while sDNA is already a commodity, it has far more value in reprogramming crops and microbes with relatively small tweaks than it has in building synthetic human genomes. So if this story were only about existing use of biology as technology, you could go back to sleep.

But there is a use of DNA that might change this story, which is why we should be paying attention, even at this late hour on a Friday night.

DNA is, by far, the most sophisticated and densest information storage medium humans have ever come across. DNA can be used to store orders of magnitude more bits per gram than anything else humans have come up with. Moreover, the internet is expanding so rapidly that our need to archive data will soon outstrip existing technologies. If we continue down our current path, in coming decades we would need not only exponentially more magnetic tape, disk drives, or flash memory, but exponentially more factories to produce these storage media, and exponentially more warehouses to store them. Even if this is technically feasible it is economically implausible. But biology can provide a solution. DNA exceeds by many times even the theoretical capacity of magnetic tape or solid state storage.

A massive warehouse full of magnetic tapes might be replaced by an amount of DNA the size of a sugar cube. Moreover, while tape might last decades, and paper might last millennia, we have found intact DNA in animal carcasses that have spent three-quarters of a million years frozen in the Canadian tundra. Consequently, there is a push to combine our ability to read and write DNA with our accelerating need for more long-term information storage. Encoding and retrieval of text, photos, and video in DNA has already been demonstrated. (Yes, I am working on one of these projects, but I can't talk about it just yet. We're not even to the embargo stage.) 

Governments and corporations alike have recognized the opportunity. Both are funding research to support the scaling up of infrastructure to synthesize and sequence DNA at sufficient rates.

For a “DNA drive” to compete with an archival tape drive today, it needs to be able to write ~2 Gbits/sec, which is about 2 Gbases/sec. That is the equivalent of ~20 synthetic human genomes/min, or nearly 30K sHumans/day, if I must coin a unit of DNA synthesis to capture the magnitude of the change. Obviously this is likely to be in the form of either short ssDNA, or possibly medium-length ss- or dsDNA if enzymatic synthesis becomes a factor. If this sDNA were to be used to assemble genomes, it would first have to be assembled into genes, and then into synthetic chromosomes, a nontrivial task. While this would be hard, and would take a great deal of effort and PhD theses, it certainly isn't science fiction.
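
For those who want to check the coined unit, here is the arithmetic, again assuming ~1 bit per base and a 6 gigabase duplex genome.

```python
# Quick check of the "sHumans/day" unit for a drive writing ~2 Gbit/s.

write_rate_bases_per_sec = 2e9   # ~2 Gbases/s at 1 bit per base
duplex_genome_bases = 6e9

genomes_per_min = write_rate_bases_per_sec * 60 / duplex_genome_bases  # ~20
shumans_per_day = genomes_per_min * 60 * 24                            # ~28,800

print(f"~{genomes_per_min:.0f} synthetic genomes/min, ~{shumans_per_day:,.0f} sHumans/day")
```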

But here, finally, is the interesting bit: the volume of sDNA necessary to make DNA information storage work, and the necessary price point, would make possible any number of synthetic genome projects. That, dear reader, is definitely something that needs careful consideration by publics. And here I do not mean "the public", the 'them' opposed to scientists and engineers in the know and in the do (and in the doo-doo, just now), but rather the Latiny, rootier sense of "the people". There is no them, here, just us, all together. This is important.

The scale of the demand for DNA storage, and the price at which it must operate, will completely alter the economics of reading and writing genetic information, in the process marginalizing the use by existing multibillion-dollar biotech markets while at the same time massively expanding capabilities to reprogram life. This sort of pull on biotechnology from non-traditional applications will only increase with time. That means whatever conversation we think we are having about the calm and ethical development of biological technologies is about to be completely inundated and overwhelmed by the relentless pull of global capitalism, beyond borders, probably beyond any control. Note that all the hullabaloo so far about synthetic human genomes, and even about CRISPR editing of embryos, etc., has been written by Western commentators, in Western press. But not everybody lives in the West, and vast resources are pushing development of biotechnology outside of the West. And that is worth an extended public conversation.

So, to sum up, have fun with all the talk of secret genome synthesis. That's boring. I am going off the grid for the rest of the weekend to pester littoral invertebrates with my daughter. You are on your own for a couple of days. Reporters, you are all awesome, make of the above what you will. Also: you are all awesome. When I get back to the lab on Monday I will get right on with fabricating the übermensch for fun and profit. But — shhh — that's a secret.

On DNA and Transistors

Here is a short post to clarify some important differences between the economics of markets for DNA and for transistors. I keep getting asked related questions, so I decided to elaborate here.

But first, new cost curves for reading and writing DNA. The occasion is some new data gleaned from a somewhat out-of-the-way source, the Genscript IPO Prospectus. It turns out that, while preparing their IPO docs, Genscript hired Frost & Sullivan to do a market survey across much of the life sciences. The Prospectus then puts Genscript's revenues in the context of the global market for synthetic DNA, which together provide some nice anchors for discussing how things are changing (or not).

So, with no further ado, Frost & Sullivan found that the 2014 global market for oligos was $241 million, and the global market for genes was $137 million. (Note that I tweeted out larger estimates a few weeks ago when I had not yet read the whole document.) Genscript reports that they received $35 million in 2014 for gene synthesis, for 25.6% of the market, which they claim puts them in the pole position globally. Genscript further reports that the price for genes in 2014 was $.34 per base pair. This sounds much too high to me, so it must be based on duplex synthesis, which would bring the linear per base cost down to $.17 per base, which sounds much more reasonable to me because it is more consistent with what I hear on the street. (It may be that Gen9 is shipping genes at $.07 per base, but I don't know anyone outside of academia who is paying that low a rate.) If you combine the price per base and the size of the market, you get about 1 billion bases worth of genes shipped in 2014 (so a million genes, give or take). This is consistent with Ginkgo's assertions saying that their 100 million base deal with Twist was the equivalent of 10% of the global gene market in 2015. For oligos, if you combine Genscript's reported average price per base, $.05, with the market size you get about 4.8 billion bases worth of oligos shipped in 2014. Frost & Sullivan thinks that from 2015 to 2019 the oligo market CAGR will be 6.6% and the gene synthesis market will come in at 14.7%.
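
If you want to reproduce that arithmetic, here it is as a short Python sketch; the market sizes and per-base prices are the Frost & Sullivan and Genscript figures quoted above, and halving the duplex price to get a per-base price is my own assumption.

```python
# 2014 market arithmetic from the Genscript prospectus figures quoted above.

gene_market_usd = 137e6
genscript_gene_revenue_usd = 35e6
price_per_duplex_bp = 0.34                       # quoted gene price, $/bp (duplex)
price_per_base_linear = price_per_duplex_bp / 2  # ~$0.17/base, my assumption

oligo_market_usd = 241e6
oligo_price_per_base = 0.05

genscript_share = genscript_gene_revenue_usd / gene_market_usd  # ~25.5%
gene_bases_2014 = gene_market_usd / price_per_base_linear       # ~8e8, "about a billion"
oligo_bases_2014 = oligo_market_usd / oligo_price_per_base      # ~4.8e9

print(f"Genscript share of gene synthesis: {genscript_share:.1%}")
print(f"Gene bases shipped: {gene_bases_2014:.1e}; oligo bases: {oligo_bases_2014:.1e}")
```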

For the sequencing, I have capitulated and put the NextSeq $1000 human genome price point on the plot. This instrument is optimized to sequence human DNA, and I can testify personally that sequencing arbitrary DNA is more expensive because you have to work up your own processes and software. But I am tired of arguing with people. So use the plot with those caveats in mind.

NOTE: This plot replaces a prior version that contained an error in the sequencing price.

What is most remarkable about these numbers is how small they are. The way I usually gather data for these curves is to chat with people in the industry, mine publications, and spot check price lists. All that led me to estimate that the gene synthesis market was about $350 million (and has been for years) and the oligo market was in the neighborhood of $700 million (and has been for years).

If the gene synthesis market is really only $137 million, with four or 5 companies vying for market share, then that is quite an eye opener. Even if that is off by a factor of two or three, getting closer to my estimate of $350 million, that just isn't a very big market to play in. A ~15% CAGR is nothing to sneeze at, usually, and that is a doubling rate of about 5 years. But the price of genes is now falling by 15% every 3-4 years (or only about 5% annually). So, for the overall dollar size of the market to grow at 15%, the number of genes shipped every year has to grow at close to 20% annually. That's about 200 million additional bases (or ~200,000 more genes) ordered in 2016 compared to 2015. That seems quite large to me. How many users can you think of who are ramping up their ability to design or use synthetic genes by 20% a year? Obviously Ginkgo, for one. As it happens, I do know of a small number of other such users, but added together they do not come close to constituting that 20% overall increase. All this suggests to me that the dollar value of the gene synthesis market will be hard pressed to keep up with Frost & Sullivan estimate of 14.7% CAGR, at least in the near term. As usual, I will be happy to be wrong about this, and happy to celebrate faster growth in the industry. But bring me data.
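
Here is a minimal sketch of the growth arithmetic in that paragraph; the CAGR, the ~5% annual price decline, and the ~1 billion bases of genes shipped per year are the figures discussed above.

```python
# Why ~15% revenue growth with ~5%/yr price declines implies ~20% volume growth.

revenue_cagr = 0.147          # Frost & Sullivan gene synthesis CAGR
annual_price_decline = 0.05   # ~15% price drop every 3-4 years
bases_per_year = 1e9          # ~1 billion bases of genes shipped annually

required_volume_growth = (1 + revenue_cagr) / (1 - annual_price_decline) - 1  # ~0.21
additional_bases = bases_per_year * required_volume_growth                    # ~2e8

print(f"Required volume growth: {required_volume_growth:.1%} per year")
print(f"Additional bases next year: {additional_bases:.1e}")  # ~200M bases, ~200k genes
```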

People in the industry keep insisting that once the price of genes falls far enough, the ~$3 billion market for cloning will open up to synthetic DNA. I have been hearing that story for a decade. And then price isn't the only factor. To play in the cloning market, synthesis companies would actually have to be able to deliver genes and plasmids faster than cloning. Given that I'm hearing delivery times for synthetic genes are running at weeks, to months, to "we're working on it", I don't see people switching en masse to synthetic genes until the performance improves. If it costs more to have your staff waiting for genes to show up by FedEx than to have them bash the DNA by hand, they aren't going to order synthetic DNA.

And then what happens if the price of genes starts falling rapidly again? Or, forget rapidly, what about modestly? What if a new technology comes in and outcompetes standard phosphoramidite chemistry? The demand for synthetic DNA could accelerate and the total market size still might be stagnant, or even fall. It doesn't take much to turn this into a race to the bottom. For these and other reasons, I just don't see the gene synthesis market growing very quickly over the next 5 or so years.

Which brings me to transistors. The market for DNA is very unlike the market for transistors, because the role of DNA in product development and manufacturing is very unlike the role of transistors. Analogies are tremendously useful in thinking about the future of technologies, but only to a point; the unwary may miss differences that are just as important as the similarities.

For example, the computer in your pocket fits there because it contains orders of magnitude more transistors than a desktop machine did fifteen years ago. Next year, you will want even more transistors in your pocket, or on your wrist, which will give you access to even greater computational power in the cloud. Those transistors are manufactured in facilities now costing billions of dollars apiece, a trend driven by our evidently insatiable demand for more and more computational power and bandwidth access embedded in every product that we buy. Here is the important bit: the total market value for transistors has grown for decades precisely because the total number of transistors shipped has climbed even faster than the cost per transistor has fallen.

In contrast, biological manufacturing requires only one copy of the correct DNA sequence to produce billions in value. That DNA may code for just one protein used as a pharmaceutical, or it may code for an entire enzymatic pathway that can produce any molecule now derived from a barrel of petroleum. Prototyping that pathway will require many experiments, and therefore many different versions of genes and genetic pathways. Yet once the final sequence is identified and embedded within a production organism, that sequence will be copied as the organism grows and reproduces, terminating the need for synthetic DNA in manufacturing any given product. The industrial scaling of gene synthesis is completely different than that of semiconductors.