The most important paragraph of The Gene Factory

The most important paragraph of Michael Specter's story about BGI:

"In the United States and in the West, you have a certain way," [BGI President Jian Wang] continued, smiling and waving his arms merrily. "You feel you are advanced and you are the best. Blah, blah, blah. You follow all these rules and have all these protocols and laws and regulations. You need somebody to change it. To blow it up. For the last five hundred years, you have been leading the way with innovation. We are no longer interested in following."
----

[Given the mix-up in the publication date of 2015, I have now deleted the original post. I have appended the comments from the original post to the bottom of this post.]

It's time once again to see how quickly the world of biological technologies is changing. The story is mixed, in part because it is getting harder to find useful data, and in part because it is getting harder to generate appropriate metrics. 

Sequencing and synthesis productivity

I'll start with the productivity plot, as this one isn't new. For a discussion of the substantial performance increase in sequencing compared to Moore's Law, as well as the difficulty of finding this data, please see this post. If nothing else, keep two features of the plot in mind: 1) the consistency of the pace of Moore's Law and 2) the inconsistent pace of sequencing productivity. Illumina appears to be the primary driver, and beneficiary, of improvements in productivity at the moment, especially if you are looking at share prices. It looks like the recently announced NextSeq and HiSeq instruments will provide substantially higher productivities (hand waving, I would say the next datum will come in another order of magnitude higher), but I think I need a bit more data before officially putting another point on the plot. Based on Erika Check Hayden's coverage at Nature, it seems that the new instruments should also provide substantial price improvements, which I get into below.

As for synthesis productivity, there have been no new commercially available instruments released for many years. sDNA providers are clearly pushing productivity gains in house, but no one outside those companies has access to productivity data.
[Figure: Carlson_DNA_Prod_Feb2013.png, DNA sequencing and synthesis productivity compared with Moore's Law]
DNA sequencing and synthesis prices

The most important thing to notice about the plots below is that prices have stopped falling precipitously. If you hear or read anyone asserting that costs are falling exponentially, you can politely refer them to the data (modulo the putative performance of the new Illumina instruments). We might again see exponential price decreases, but that will depend on a combination of technical innovation, demand, and competition, and I refer the reader to previous posts on the subject. Note that prices not falling isn't necessarily bad and doesn't mean the industry is somehow stagnant. Instead, it means that revenues in these sectors are probably not falling, which will certainly be welcomed by the companies involved. As I described a couple of weeks ago, and in a Congressional briefing in November, revenues in biotech continue to climb steeply.

The second important thing to notice about these plots is that I have changed the name of the metric from "cost" to "price". Previously, I had decided that this distinction amounted to no real difference for my purposes. Now, however, the world has changed, and cost and price are very different concepts for anyone thinking about the future of DNA. Previously, there was at times an order of magnitude change in cost from year to year, and keeping track of the difference between cost and price didn't matter. In a period when change is much slower, that difference becomes much more important. Moreover, as the industry becomes larger, more established, and generally more important for the economy, we should all take more care in distinguishing between concepts like cost to whom and price for whom.

In the plot that follows, the price is for finished, not raw, sequencing.
[Figure: Carlson_Price_Seq_Synth_Feb2014.png, price of DNA sequencing and synthesis]
And here is a plot only of oligo and gene-length DNA:
[Figure: Carlson_Price_sDNA_Feb2014.png, price per base of oligo and gene-length sDNA]
What does all this mean? Illumina's instruments are now responsible for such a high percentage of sequencing output that the company is effectively setting prices for the entire industry. Illumina is being pushed by competition to increase performance, but this does not necessarily translate into lower prices. It doesn't behoove Illumina to drop prices at this point, and we won't see any substantial decrease until a serious competitor shows up and starts threatening Illumina's market share. The absence of real competition is the primary reason sequencing prices have flattened out over the last couple of data points.

I pulled the final datum on the sequencing curve from the NIH; the title on the NIH curve is "cost", but as it includes indirect academic costs I am going to fudge and call it "price". I notice that the NIH is now publishing two sequencing cost curves, and I'll bet that the important differences between them are too subtle for most viewers. One curve shows cost per megabase of raw sequence - that is, data straight off the instruments - and the other curve shows cost per finished human genome (assuming ~30X coverage of 3x10^9 bases). The cost per base of that finished sequencing is a couple orders of magnitude higher than the cost of the raw data. On the HiSeq X data sheet, Illumina has boldly put a point on the cost per human genome curve at $1000. But I have left it off the above plot for the time being; the performance and cost claimed by Illumina are just for human genomes rather than for arbitrary de novo sequencing. Mick Watson dug into this, and his sources inside Illumina claim that this limitation is in the software, rather than the hardware or the wetware, in which case a relatively simple upgrade could dramatically expand the utility of the instrument. Or perhaps the "de novo sequencing level" automatically unlocks after you spend $20 million in reagents. (Mick also has some strong opinions about the role of competition in pushing the development of these instruments, which I got into a few months ago.)
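To make the raw-versus-finished distinction concrete, here is a minimal sketch of the arithmetic; the raw price per megabase is a placeholder chosen for illustration, not the actual NIH figure.

```python
# Why finished sequence costs far more per base than raw sequence: a sketch
# with a placeholder raw price, not the actual NIH figure.
genome_size = 3e9        # haploid human genome, bases
coverage = 30            # ~30X coverage assumed for a finished genome
raw_price_per_mb = 0.05  # hypothetical $/megabase of raw sequence

raw_cost_per_genome = (genome_size * coverage / 1e6) * raw_price_per_mb
print(f"Raw data cost per genome: ${raw_cost_per_genome:,.0f}")

cost_per_finished_base = raw_cost_per_genome / genome_size
cost_per_raw_base = raw_price_per_mb / 1e6
print(f"Finished/raw cost ratio: {cost_per_finished_base / cost_per_raw_base:.0f}x")
# Coverage alone contributes a factor of 30; informatics, labor, and
# indirect costs push the gap toward the couple orders of magnitude
# mentioned above.
```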

Synthesis prices have slowed for entirely different reasons. Again, I have covered this ground in many other posts, so I won't belabor it here. 

Note that the oligo prices above are for column-based synthesis, and that oligos synthesized on arrays are much less expensive. However, array synthesis comes with the usual caveat that the quality is generally lower, unless you are getting your DNA from Agilent, which probably means you are getting your dsDNA from Gen9.

Note also that the distinction between the price of oligos and the price of double-stranded sDNA is becoming less useful. Whether you are ordering from Life/Thermo or from your local academic facility, the cost of producing oligos is now, in most cases, independent of their length. That's because the cost of capital (including rent, insurance, labor, etc) is now more significant than the cost of goods. Consequently, the price reflects the cost of capital rather than the cost of goods. Moreover, the cost of the columns, reagents, and shipping tubes is certainly more than the cost of the atoms in the sDNA you are ostensibly paying for. Once you get into longer oligos (substantially larger than 50-mers) this relationship breaks down and the sDNA is more expensive. But, at this point in time, most people aren't going to use longer oligos to assemble genes unless they have a tricky job that doesn't work using short oligos.
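To illustrate the cost-of-capital point, here is a toy model of what length-independent pricing looks like when fixed costs dominate; all of the numbers are invented for illustration and do not come from any provider's price list.

```python
# Toy model of oligo pricing when fixed costs dominate. All numbers are
# invented for illustration; none come from a real price list.
def oligo_price(length_bases,
                fixed_cost=8.00,             # labor, rent, QC, shipping, etc.
                reagent_cost_per_base=0.01): # cost of goods
    return fixed_cost + reagent_cost_per_base * length_bases

for n in (20, 40, 60):
    print(f"{n}-mer: ${oligo_price(n):.2f}")
# 20-mer: $8.20, 40-mer: $8.40, 60-mer: $8.60 -- the price is nearly flat
# in length because the fixed component swamps the cost of goods.
```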

Looking forward, I suspect oligos aren't going to get much cheaper unless someone sorts out how to either 1) replace the requisite human labor and thereby reduce the cost of capital, or 2) finally replace the phosphoramidite chemistry that the industry relies upon. I know people have been talking about new synthesis chemistries for many years, but I have not seen anything close to market.

Even the cost of double-stranded sDNA depends less strongly on length than it used to. For example, IDT's gBlocks come at prices that are constant across quite substantial ranges in length. Moreover, part of the decrease in price for these products is embedded in the fact that you are buying smaller chunks of DNA that you then must assemble and integrate into your organism of choice. The longer gBlocks come in as low as ~$0.15/base, but you have to spend time and labor in house in order to do anything with them. Finally, so far as I know, we are still waiting for Gen9 and Cambrian Genomics to ship DNA at the prices they have suggested are possible. 
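As a back-of-the-envelope illustration of that in-house time and labor, here is a sketch using the ~$0.15/base gBlock figure above; the construct size and labor numbers are assumptions, not quotes from any provider.

```python
# Back-of-the-envelope cost for building a construct from gBlocks, using the
# ~$0.15/base figure quoted above; the labor numbers are assumptions.
construct_length = 5000      # bases in the final construct
gblock_length = 1000         # bases per gBlock (illustrative)
price_per_base = 0.15        # ~$0.15/base for longer gBlocks
labor_hours = 8              # assumed in-house assembly and verification time
labor_rate = 50              # assumed fully loaded $/hour

n_blocks = construct_length // gblock_length
dna_cost = n_blocks * gblock_length * price_per_base
total = dna_cost + labor_hours * labor_rate
print(f"{n_blocks} gBlocks, DNA ${dna_cost:.0f}, all-in ${total:.0f} "
      f"(${total / construct_length:.3f}/base effective)")
# The effective per-base price of a working construct is set as much by
# the in-house assembly labor as by the sDNA itself.
```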

How much should we care about the price of sDNA?

I recently had a chat with someone who has purchased and assembled an absolutely enormous amount of sDNA over the last decade. He suggested that if prices fell by another order of magnitude, he could switch completely to outsourced assembly. This is an interesting claim, and potentially an interesting "tipping point". However, what this person really needs is not just sDNA, but sDNA integrated in a particular way into a particular genome operating in a particular host. And, of course, the integration and testing of the new genome in the host organism is where most of the cost is. Given the wide variety of emerging applications, and the growing array of hosts/chassis, it isn't clear that any given technology or firm will be able to provide arbitrary synthetic sequences incorporated into arbitrary hosts.

Consequently, as I have described before, I suspect that we aren't going to see a huge uptake in orders for sDNA until cheaper genes and circuits are coupled to reductions in cost for the rest of the build, test, and measurement cycle. Despite all the talk recently about organism fabs and outsourced testing, I suggest that what will really make a difference is providing every lab and innovator with adequate tools and infrastructure to do their own complete test and measurement. We should look to progress in pushing all facets of engineering capacity for biological systems, not just on reducing the cost of reading old instructions and writing new ones.

----

Comments from original post follow.

George Church:

"the performance and cost claimed by Illumina are just for human genomes rather than for arbitrary de novo sequencing."  --Rob
(genome.gov/images/content/cost_per_genome.jpg)  But most of the curve has been based on human genome sequencing until now.  Why exclude human, rather than having a separate curve for "de novo"?  Human genomes constitute a huge and compelling market.  -- George

"oligos synthesized on arrays are much less expensive. However, array synthesis comes with the usual caveat that the quality is generally lower, unless you are getting your DNA from Agilent"  -- Rob
So why exclude Agilent from the curve? -- George

"we aren't going to see a huge uptake in orders for sDNA until cheaper genes and circuits are coupled to reductions in cost for the rest of the build, test, and measurement cycle." --Rob
Is this the sort of enabling technology needed?: arep.med.harvard.edu/pdf/Goodman_Sci_13.pdf

My response to George:

George,

Thanks for your comments. I am not sure what you might mean by "most of the curve has been based on human genome sequencing". From my first efforts in 2000 (published initially in 2003), I have tried to use data that is more general. It is true that human genomes constitute a large market, but they aren't the only market. By definition, if you are interested in sequencing or building any other organism, then instruments that specialize in sequencing humans are of limited relevance. It may also be true that the development of new sequencing technologies and instruments has been driven by human sequencing, but that is beside the point. It may even be true that the new Illumina systems can be just as easily used to sequence mammoths, but that isn't happening yet. I have been doing my best to track the cost, and now the price, of de novo sequencing.

As I mention in the post, it is time that everyone, including me, started being more careful about the difference between cost and price. This brings me to oligos.

Agilent oligos are a special case. So far as I know, only Gen9 is using Agilent oligos as raw material to build genes. As you know, Cambrian Genomics is using arrays produced using technology developed at Combimatrix, and in any event isn't yet on the market. It is my understanding that, aside from Gen9, Agilent's arrays are primarily used for analysis rather than for building anything. Therefore, the market *price* of Agilent oligos is irrelevant to anyone except Gen9.

If the *cost* of Agilent oligos to Gen9 were reflected in the *price* of the genes sold by Gen9, or if those oligos were more broadly used, then I would be more interested in including them on the price curve. So far as I am aware, the price for the average customer at Gen9 is in the neighborhood of $.15-.18 per base. I've heard Drew Endy speak of a "friends and family" (all academics?) price of ~$.09 from Gen9, but that does not appear to be available to all customers, so I will leave it off the plot for now.

All this comes down to the obvious fact that, as the industry matures and becomes supported more by business-to-business sales rather than being subsidized by government grants and artificially cheap academic labor, the actual cost and actual price start to matter a great deal. Deextinction, in particular, might be an example where an academic/non-profit project might benefit from low cost (primarily cost of goods and cost of labor) that would be unachievable on the broader market where the price would be set by 1) keeping the doors of a business open, 2) return on capital, and 3) competition, not necessarily in that order. The academic cost of developing, demonstrating, and even using technologies is almost always very different from the eventual market price of those technologies.

The bottom line is that, from day one, I have been trying to understand the impact of biological technologies on the economy. This impact is most directly felt, and tracked, via the price that most customers pay for goods and services. I am always looking to improve the metrics I use, and if you have suggestions about how to do this better I am all ears.

Finally, yes, the papers you cite (above and on the Deextinction mailing list) describe the sort of thing that could help reduce engineering costs. Ultimately technologies like those will reduce the market price of products resulting from that engineering process. I look forward to seeing more, and also to seeing this technology utilized in the market.

Thanks again for your thoughtful questions.
----

I recently rediscovered this piece, which I had forgotten I had written as a submission to the development process for the National Bioeconomy Blueprint. The #GMDP numbers are obviously a bit out of date: in 2012, US revenues from genetically modified systems reached at least $350 billion, or ~2.5% of GDP.

"Building a 21st Century Bioeconomy: Fostering Economic and Physical Security Through Public-Private Partnerships and a National Network of Community Labs" (click image for PDF).



The bioeconomy continues to emerge as a significant component of the U.S. economy. Domestic revenues from genetically modified systems are growing at approximately 10% annually, much faster than the economy as a whole. Around the world, an ever larger number of countries have articulated strategies that explicitly identify biotechnology as critical to economic growth. The U.K., for example, has gone so far as to explicitly name synthetic biology as one of the "eight great technologies" that will propel economic growth and has announced more than £160M (or about £2 per capita) for research, development, and commercialization of synthetic biology.


GMDP


As I announced during a Congressional Briefing in November, the total 2012 U.S. revenues from genetically modified systems, hereafter the Genetically Modified Domestic Product (GMDP), reached at least $350 billion, the equivalent of approximately 2.5% of GDP, up from $300 billion in 2010. For comparison, according to IHS iSuppli, the 2012 global revenues for the semiconductor industry amounted to $322 billion. Remarkably, assuming a 2011-12 GDP annual growth rate of 2.5%, the two year, $50 billion increase in GMDP accounted for almost 7% of total U.S. GDP growth.
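For readers who want to check the arithmetic, here is a minimal sketch of how the "almost 7%" figure falls out of the numbers above. The GDP value is my approximation of 2012 U.S. nominal GDP, an assumption added for the calculation; everything else is from the text.

```python
# A minimal check of the GMDP growth-contribution arithmetic. The GDP figure
# is an approximation of 2012 U.S. nominal GDP; the rest is from the text.
gmdp_2012 = 350e9          # U.S. revenues from genetically modified systems, 2012
gmdp_2010 = 300e9          # same, 2010
gdp_2012 = 16e12           # approximate 2012 U.S. GDP (assumption)
gdp_growth = 0.025         # assumed annual GDP growth rate, 2011-12

gdp_2010 = gdp_2012 / (1 + gdp_growth) ** 2
delta_gdp = gdp_2012 - gdp_2010          # two-year nominal GDP increase
delta_gmdp = gmdp_2012 - gmdp_2010       # two-year GMDP increase

print(f"GMDP share of GDP: {gmdp_2012 / gdp_2012:.1%}")                    # ~2.2%
print(f"GMDP share of two-year GDP growth: {delta_gmdp / delta_gdp:.1%}")  # ~6.5%
# Roughly consistent with the "almost 7%" figure quoted above.
```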


Due to differences in regulatory structure, financing, and, consequently, pace of development and commercialization across the industry, the GMDP naturally breaks down into the sub-sectors of biotech drugs (biologics), GM crops, and industrial biotechnology.


Biologics


In 2012, global revenues from biologics reached $125 billion. In the U.S., domestic revenues from biologics reached nearly $100 billion, although this figure includes $28 billion in revenues accruing to companies such as Genentech, Zymogenetics, and Genzyme that are now wholly owned by overseas entities. Domestic U.S. clinical demand for biologics rose about 5%, reaching almost $54 billion in sales in 2011, indicating that the U.S. continues to enjoy a substantial positive balance of payments from biologics sold in international markets.


GM Crops


In 2012, global planting of GM crops increased by 6%, reaching a total of 170 million hectares. Of the 17 million farmers who chose to plant GM crops, more than 15 million were "resource poor farmers in developing countries". In the U.S., where farmers planted 40% of the total GM area, GM corn, cotton, and soy held steady at approximately 90% penetration, with GM sugar beets planted at about the 95% level. Based on average crop revenue figures compiled by the USDA, I estimate that in 2012 the combination of biotech seeds and farm-level revenues reached $125 billion in the U.S.


Industrial Biotechnology


U.S. revenues from industrial biotech (fuels, enzymes, and materials) reached at least $125 billion in 2012. The accuracy of this estimate continues to suffer in comparison to revenues from biologics and GM crops due to the quality of available data. For the purposes of this post, I am temporarily relying on an estimate provided by Agilent Technologies, as recently described by Darlene Solomon. The internal breakdown of the $125 billion in business-to-business sales is quite interesting: $66 billion in chemicals, $30 billion in biofuels, $16 billion in biologics feedstocks, $12 billion in food and ag, and $1 billion in emerging markets. (Agilent did not provide any greater specificity on how these areas were defined or how the numbers were derived.) As I have been predicting for several years, it appears that chemicals have eclipsed fuels as the largest component of industrial biotech revenues. Finally, note that, at the level of consumers, the ultimate economic impact of these revenues is probably larger than $125 billion.


More Work To Do


It is important to recognize that the preceding estimates are relatively inaccurate compared to those describing other parts of the U.S. economy. While I have previously estimated revenues from biopharmaceuticals ("biologics") and GM crops using corporate financial reporting and data collected by the USDA, respectively, revenues from industrial biotechnology are poorly constrained because no relevant data is gathered by the U.S. government or provided by industry (see previous reports on this topic for an in-depth discussion). The Agilent numbers are a welcome additional bit of information, but we really need to have better data for, and analysis of, the GMDP in order to understand the larger impacts on our economy and society. Among other details, we need to understand the skill base and employment, and also generate historical estimates in order to sort out what the longer term trends look like. As I mentioned during my presentation at SynBioBeta in November, I am launching a new non-profit to take up this task. More on this soon.


#GMDP = Genetically Modified Domestic Product

BAS: From national security to natural security

Here is my recent essay in Bulletin of the Atomic Scientists: "From national security to natural security".

The first few paragraphs:

From 10,000 meters up, the impact of humans on the Earth is clear. Cities spanning kilometers are connected by roadways stretching to the horizon. Crowding the spaces in between are fields that supply food and industrial feedstock to satisfy a variety of human hungers. These fields feed humanity. Through stewardship we maintain their productivity and thus sustain societies that extend around the globe; if these fields fall into ill health, or if we push them into sickness, we risk the fate of those same societies.

Humans have a long history of modifying the living systems they rely on. Forests in Europe and North America have been felled for timber and have regrown, while other large tracts of land around the world have been completely cleared for use in agriculture. The animals and plants we humans eat on a regular basis have been selected and bred over millennia to suit the human palate and digestive tract. All these crops and products are shipped and consumed globally to satisfy burgeoning demand.

Our technology and trade thereby support a previously unimaginable quality of life for a previously impossible number of people. Humanity has kept Malthus at bay through centuries of growth. Yet our increasing numbers impose a load that is now impacting nature's capacity to support human societies. This stress comes at a time when ever-larger numbers of humans demand more: more food, more clean water, more energy, more education, more entertainment, more more.

Increasing human demand raises the question of supply, and of the costs of meeting that supply. How we choose to spend or to conserve land, water, and air to meet our needs clearly impacts other organisms that also depend on these resources. Nature has intrinsic value for many people in the form of individual species, ecosystems, and wilderness; nature also constitutes critical infrastructure in the form of ecosystems that keep us alive. That infrastructure has quantifiable economic value. Consequently, nature, and the change we induce in it, is clearly interwoven with our economy. That is, the security and health of human societies depends explicitly upon the security and health of natural systems. Therefore, as economic security is now officially considered as part and parcel of national security strategy, it is time to expand our understanding of national security to include natural security.

Here is the main page for the 5 November, 2013 Congressional Briefing in the U.S. Senate, Tooling the U.S. Bioeconomy: Synthetic Biology. Speakers included Mary Maxon, Darlene Solomon, and Chris Voigt. Here is my contribution:

 

[Video: Rob Carlson, Components and Potential of the Growing Bioeconomy, from ACS Science & the Congress on Vimeo]


And here is the Q&A following the presentations, during which we got into issues of risk, security, public acceptance, etc:

ASU SOLS Meets Society, 28-29 October

I'll be at ASU in a few weeks for the "SOLS Meets Society" Lecture Series.  See you there.

[Image: ASU_Carlson_Flyer.jpg, lecture series flyer]

Harry Potter and The Future of Nature

How will Synthetic Biology and Conservation Shape the Future of Nature?  Last month I was privileged to take part in a meeting organized by The Wildlife Conservation Society to consider that question.  Here is the framing paper (PDF), of which I am a co-author.  There will be a follow-up paper in the coming months.  I am still mulling over what I think happened during the meeting, and below are a few observations that I have managed to settle on so far.  Others have written their own accounts.  Here is a summary from Julie Gould, riffing on an offer that Paul Freemont made to conservation biologists at the close of the meeting, "The Open Door".  Ed Gillespie has a lovely, must-read take on Pandora's Box, cane toads, and Emily Dickinson, "Hope is the thing with feathers".  Cristian Samper, the new head of the Wildlife Conservation Society, was ultimately quite enthusiastic; Jim Thomas of ETC, unsurprisingly, not so much.

The meeting venue was movie set-like Cambridge.  My journey took me through King's Cross, with its requisite mock-up of a luggage trolley passing through the wall at platform nine and three-quarters.  So I am tempted to style parts of the meeting as a confrontation between a boyish protagonist trying to save the world and He Who Must Not Be Named.  But my experience at the meeting was that not everyone was able to laugh at a little tension-relieving humor, or even to recognize that humor.  Thus the title of this post is as much as I will give in to temptation.

How Can SB and CB Collaborate?

I'll start with an opportunity that emerged during the week, exactly the sort of thing you hope would come from introducing two disciplines to each other.  What if synthetic biology could be used as a tool to aid in conservation efforts, say to buttress biodiversity against threats?  If the ongoing, astonishing loss of species were an insufficient motivation to think about this possibility, now some species that humans explicitly rely upon economically are under threat.  Synthetic biology might - might! - be able to offer help in the form of engineering species to be more robust in the face of a changing environment, such as enabling corals to cope with increases in water temperature and acidity, or perhaps via intervening in a host-parasite relationship, such as that between bats and white-nose disease or between bees and their mites and viruses.

The first thing to say here is that if the plight of various species can be improved through changes in human behavior then we should by all means work toward that end.  The simpler solution is usually the better solution.  For example, it might be a good idea to stop using those pesticides and antibiotics that appear to create more problems than they solve when introduced into the environment.  Moreover, at the level of the environment and the economy, technological fixes are probably best reserved until we try changes in human behavior.  After all, we've mucked up such fixes quite a few times already.  (All together now: "Cane Toad Blues".)  But what if the damage is too far along and cannot be addressed by changes in behavior?  We should at least consider the possibility that a technological fix might be worth a go, if for no other reason than to figure out how to create a back-up plan.  Given the time scales involved in manipulating complex organisms, exploring the option of a back-up plan means getting started early.  It also means thoughtfully considering which interventions would be most appropriate and urgent, where part of the evaluation should probably involve asking whether changes in human behavior are likely to have any effect.  In some cases, a technical solution is likely to be our only chance.

First up: corals.

We heard from Stanford's Steve Palumbi on work to understand the effects of climate change on corals in the South Pacific.  Temperature and acidity - two parameters already set on long term changes - are already affecting coral health around the globe.  But it turns out that in the lab some corals can handle remarkably difficult environmental conditions.  What if we could isolate the relevant genetic circuits and, if necessary, transplant them into other species, or turn them on if they are already widespread?  My understanding of Professor Palumbi's talk is that it is not yet clear why some corals have the pathway turned on and some do not.  So, first up, a bunch of genetics, molecular biology, and field biology to figure out why the corals do what they do.  After that, if necessary, it seems that it would be worth exploring whether other coral species can be modified to use the relevant pathways.  Corals are immensely important for the health of both natural ecosystems and human economies; we should have a back-up plan, and synthetic biology could certainly contribute.

Next up: bats.

Bats are unsung partners of human agriculture, and they contribute an estimated $23 billion annually to U.S. farmers by eating insects and pollinating various plants.  Here is a nice summary article from The Atlantic by Stephanie Gruner Buckley on the impact upon North American bats of white nose syndrome.  The syndrome, caused by a fungus evidently imported from Europe, has already killed so many bats that we may see an impact on agriculture as soon as this year.  European bats are resistant to the fungus, so one option would be to try to introduce the appropriate genes into North American bats via standard breeding.  However, bats breed very slowly, usually only having one pup a year, and only 5 or so pups in a lifetime.  Given the mortality rate due to white nose syndrome, this suggests breeding is probably too slow to be useful in conservation efforts.  What if synthetic biology could be used to intervene in some way, either to directly attack the non-native fungus or to interfere with its attack on bats?  Obviously this would be a hard problem to take on, but both biodiversity and human welfare would be improved by making progress here.

And now: bees.

If you eat, you rely on honeybees.  Due to a variety of causes, bee populations have fallen to the point where food crops are in jeopardy.  Entomologist Dennis vanEngelsdorp, quoted in Wired, warns "We're getting closer and closer to the point where we don't have enough bees in this country to meet pollination demands.  If we want to grow fruits and nuts and berries, this is important.  One in every three bites [of food consumed in the U.S.] is directly or indirectly pollinated by bees."  Have a look at the Wired article for a summary of the constellation of causes of Colony Collapse Disorder, or CCD -- they are multifold and interlocking.  Obviously, the first thing to do is to stop making the problem worse; Europe has banned a class of pesticide that is exceptionally hard on honeybees, though the various sides in this debate continue to argue about whether that will make any difference.  This change in human behavior may have some impact, but most experts agree we need to do more.  Efforts are underway to breed bees that are resistant to both pesticides and to particular mites that prey on bees and that transmit viruses between bees.  Applying synthetic biology here might be the hardest task of all, given the complexity of the problem.  Should synthetic biologists focus on boosting apian immune systems?  Should they focus on the mite?  Apian viruses?  It sounds very difficult.  But with such a large fraction of our food supply dependent upon healthy bees, it also seems pretty clear that we should be working on all fronts to sort out potential solutions.

A Bit of Good News

Finally, a problem synthetic biologists are already working to solve: malaria.  The meeting was fortunate to hear directly from Jay Keasling.  Keasling presented progress on a variety of fronts, but the most striking was his announcement that Sanofi-Aventis has produced substantially more artemisinin this year than planned, marking real progress in producing the best malaria drug extant using synthetic biology rather than by purifying it from plants.  Moreover, he announced that Sanofi and OneWorldHealth are likely to take over the entire world production of artemisinin.  The original funding deal between The Gates Foundation, OneWorldHealth, Amyris, and Sanofi required selling at cost.  The collaboration has worked very hard at bringing the price down, and now it appears that they can simply outcompete the for-profit pricing monopoly.

The stated goal of this effort is to reduce the cost of malaria drugs and provide inexpensive cures to the many millions of people who suffer from malaria annually.  Currently, the global supply fluctuates, as, consequently, do prices, which are often well above what those afflicted can pay.  A stable, high volume source of the drug would reduce prices and also reduce the ability of middle-men to sell doctored, diluted, or mis-formulated artemisinin, all of which are contributing to a rise of resistant pathogens.

There is a potential downside to this project.  If Sanofi and OneWorldHealth do corner the market on artemisinin, then farmers who currently grow artemisia will no longer have that option, at least for supplying the artemisinin market.  That might be a bad thing, so we should at least ask the question of whether the world is a better place with artemisinin production done in vats or derived from plants.  This question can be broken into two pieces: 1) what is best for the farmers? and 2) what is best for malaria sufferers?  It turns out these questions have the same answer.

There is no question that people who suffer from malaria will be better off with artemisinin produced in yeast by Sanofi.  Malaria is a debilitating disease that causes pain, potentially death, and economic hardship.  The best estimates are that countries in which malaria is endemic suffer a hit to GDP growth of 1.3% annually compared to non-malarious countries.  Over just a few years this yearly penalty swamps all the foreign aid those countries receive; I've previously argued that eliminating malaria would be the biggest humanitarian achievement in history and would make the world a much safer place.  Farmers in malarious countries are the worst hit, because the disease prevents them from getting into the fields to work.  I clashed in public over this with Jim Thomas around our respective testimonies in front of the Presidential Bioethics Commission a couple of years ago.  Quoting myself briefly from the relevant blog post,

The human cost of not producing inexpensive artemisinin in vats is astronomical.  If reducing the burden of malaria around the world on almost 2 billion people might harm "a few thousand" farmers, then we should make sure those farmers can make a living growing some other crop.  We can solve both problems.  ...Just one year of 1.3% GDP growth recovered by reducing (eliminating?) the impact of malaria would more than offset paying wormwood farmers to grow something else.  There is really no argument to do anything else.
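To give a feel for the scale of that argument, here is a minimal sketch of the compounding; the GDP, baseline growth rate, and aid figures are placeholders chosen for illustration, not data for any particular country.

```python
# A minimal sketch of the scale argument. GDP, growth, and aid figures are
# placeholders chosen for illustration, not data for any particular country.
gdp = 50e9             # hypothetical GDP of a malarious country
baseline_growth = 0.03 # assumed growth rate absent malaria
penalty = 0.013        # 1.3% annual growth hit attributed to malaria
annual_aid = 0.5e9     # hypothetical foreign aid received per year
years = 10

gdp_without_malaria = gdp * (1 + baseline_growth) ** years
gdp_with_malaria = gdp * (1 + baseline_growth - penalty) ** years
print(f"10-year GDP gap from malaria: ${gdp_without_malaria - gdp_with_malaria:,.0f}")
print(f"10-year cumulative aid:       ${annual_aid * years:,.0f}")
# Even with modest assumptions, the compounded growth penalty exceeds the
# cumulative aid, which is the point of the passage quoted above.
```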

For a bit more background on artemisinin supply and pricing, and upon the apparent cartel in control of pricing both the drug and the crop, see this piece in Nature last month by Mark Peplow.  I was surprised to learn that the price of artemisia is set by a small group that controls production of the drug.  This group, unsurprisingly, is unhappy that they may lose control of the market for artemisinin to a non-profit coalition whose goal is to eliminate the disease.  Have a look at the chart titled "The Cost of Progress", which reveals substantial price fluctuations, to which I will return below.

Mr. Thomas responded to Keasling's announcement in Cambridge with a broadside in the Guardian UK against Keasling and synthetic biology more generally.  Mr. Thomas is always quick to shout "What about the farmers?"  Yet he is rather less apt to offer actual analysis of what farmers actually gain, or lose, by planting artemisia.

The core of the problem for farmers is in that chart from Nature, which shows that artemisinin has fluctuated in price by a factor of 3 over the last decade.  Those fluctuations are bad for both farmers and malaria sufferers; farmers have a hard time knowing whether it makes economic sense to plant artemisia, which subsequently means shortages if farmers don't plant enough.  Shortages mean price spikes, which causes more farmers to plant, which results in oversupply, which causes the price to plunge, etc.  You'll notice that Mr. Thomas asserts that farmers know best, but he never himself descends to the level of looking at actual numbers, and whether farmers benefit by growing artemisia.  The numbers are quite revealing.

Eyeballing "The Cost of Progress" chart, it looks like artemisia has been below the $400/kg level for about half the last 10 years.  To be honest, there isn't enough data on the chart to make firm conclusions, but it does look like the most stable price level is around $350/kg, with rapid and large price spikes up to about $1000/kg.  Farmers who time their planting right will probably do well; those who are less lucky will make much less on the crop.  So it goes with all farming, unfortunately, as I am sure Mr. Thomas would agree.

During his talk, Keasling put up a chart I hadn't seen before, which showed predicted farmer revenues for a variety of crops.  The chart is below; it makes clear that farmers will have substantially higher revenues planting crops other than artemisia at prices at or below $400/kg. 
[Figure: Keasling_Alternate_crops.png, predicted farmer revenues for artemisia and alternative crops]
The Strange Arguments Against Microbial Production of Malaria Drugs

Mr. Thomas' response in the Guardian to rational arguments and actual data was a glib accusation that Keasling is dismissing the welfare of farmers with "Let them plant potatoes".  This is actually quite clever and witty, but not funny in the slightest when you look at the numbers.  Thomas worries that farmers in Africa and Asia will suffer unduly from a shift away from artemisia to yeast.  But here is the problem: those farmers are already suffering -- from malaria.  Digging deeper, it becomes clear that Mr. Thomas is bafflingly joining the pricing cartel in arguing against the farmers' best interests.

A brief examination of the latest world malaria map shows that the most intense malaria hot spots are in Africa and Asia, with South America not far behind (here is the interactive CDC version).  Artemisia is primarily grown in Africa and Asia.  That is, farmers most at risk of contracting malaria only benefit economically when there is a shortage of artemisinin, the risk of which is maintained by leaving artemisia production in the hands of farmers.  Planting sufficient quantities of artemisia to meet demand means prices that are not economically viable for the farmer.  There are some time lags here due to growing and processing the crop into the drug, but the upshot is that the only way farmers make more money planting artemisia than other crops is when there is a shortage.  This is a deadly paradox, and its existence has only one beneficiary: the artemisinin pricing cartel.  But we can now eliminate the paradox.  It is imperative for us to do so.

Once you look at the numbers there is no argument Mr. Thomas, or anyone else, can make that we should do anything but brew artemisinin in vats and bring the price as low as possible.

I had previously made the macro-scale economic arguments about humanitarian impacts and economic growth.  Malarious countries, and all the farmers in them, would benefit tremendously by a 1.3% annual increase in GDP.  But I only realized while writing this post that the micro-scale argument gives the same answer: the farmers most at risk from malaria only make money growing artemisia when there is a shortage of the drug, which is when they are most likely to be affected by the disease.

I get along quite well in person with Mr. Thomas, but I have long been baffled by his arguments about artemisinin.  I heartily support his aims of protecting the rights of farmers and taking care of the land.  We should strive to do the right thing, except when analysis reveals it to be the wrong thing.  Since I only just understood the inverse relationship between artemisinin pricing and the availability of the drug to the very farmers growing artemisia, I am certain Mr. Thomas has not had the opportunity to consider the facts and think through the problem so that he might come to the same conclusion.  I invite him to do so.

How Competition Improves DNA Sequencing

The technology that enables reading DNA is changing very quickly.  I've chronicled how price and productivity are each improving in a previous post; here I want to try to get at how the diversity of companies and technologies is contributing to that improvement.

As I wrote previously, all hell is breaking loose in sequencing, which is great for the user.  Prices are falling and the capabilities of sequencing instruments are skyrocketing.  From an analytical perspective, the diversity of platforms is a blessing and a curse.  There is a great deal more data than just a few years ago, but it has become quite difficult to directly compare instruments that produce different qualities of DNA sequence, produce different read lengths, and have widely different throughputs.

I have worked for many years to come up with intuitive metrics to aid in understanding how technology is changing.  Price and productivity in reading and writing DNA are pretty straightforward.  My original paper on this topic (PDF) also looked at the various components of determining protein structures, which, given the many different quantifiable tasks involved, turned out to be a nice way to encapsulate a higher level look at rates of change.

In 2007, with the publication of bio-era's Genome Synthesis and Design Futures, I tried to get at how improvements in instrumentation were moving us toward sequencing whole genomes. The two axes of the relevant plot were 1) read length -- the length of each contiguous string of bases read by an instrument, critical to accurate assembly of genomes or chromosomes that can be hundreds of millions of bases long -- and 2) the daily throughput per instrument -- how much total DNA each instrument could read.  If you have enough long reads you can use this information as a map to assemble many shorter reads into the contiguous sequence.
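To get a rough feel for the trade-off between those two axes, here is a sketch comparing two hypothetical instruments, one from each grouping described below; the specs are illustrative stand-ins, not measurements of any actual machine.

```python
# Two hypothetical instruments, one from each grouping on the plot.
# Specs are illustrative, not measurements of any actual machine.
genome = 3e9      # human genome, bases
coverage = 30     # target coverage

instruments = {
    "long reads, low throughput":   {"read_len": 10_000, "bases_per_day": 5e8},
    "short reads, high throughput": {"read_len": 100,    "bases_per_day": 5e10},
}

for name, spec in instruments.items():
    total_bases = genome * coverage
    days = total_bases / spec["bases_per_day"]
    reads = total_bases / spec["read_len"]
    print(f"{name}: {days:.0f} days per genome, {reads:.1e} reads to assemble")
# The long-read machine produces far fewer reads (an easier assembly problem)
# but takes far longer; the short-read machine is fast but leaves a much
# harder assembly problem, which is why long reads serve as the map.
```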

Because there weren't very many models of commercially available sequencers in 2007, the original plot didn't have a lot of data on it (the red squares and blue circles below).  But the plot did show something interesting, which was that two general kinds of instruments were emerging at that time: those that produced long reads but had relatively limited throughput, and those that produced short reads but could process enormous amounts of sequence per day.  The blue dots below were data from my original paper, and the red squares were derived from a Science news article in 2006 that looked at instruments said to be emerging over the next year or so.

I have now pulled performance estimates out of several papers assessing instruments currently on the market and added them to the plot (purple triangles).  The two groupings present in 2007 are still roughly extant, though the edges are blurring a bit. (As with the price and productivity figures, I will publish a full bibliography in a paper later this year.  For now, this blog post serves as the primary citation for the figure below.)

I am still trying to sort out the best way to represent the data (I am open to suggestions about how to do it better).  At this point, it is pretty clear that the two major axes are insufficient to truly understand what is going on, so I have attempted to add some information regarding the release schedules of new instruments.  Very roughly, we went from a small number of first generation instruments in 2003 to a few more real instruments in 2006 that performed a little better in some regards, plus a few promised instruments that didn't work out for one reason or another.  However, starting in about 2010, we began to see seriously improved instruments being released on an increasingly rapid schedule.  This improvement is the result of competition not just between firms, but also between technologies.  In addition, some of what we are seeing is the emergence of instruments that have niches: long reads but medium throughput, short reads but extraordinary throughput -- combine these two capabilities and you have the ability to crank out de novo sequences at a pretty remarkable rate.  (For reference, the synthetic chromosome Venter et al published a few years ago was about one million bases; human chromosomes are in the range of 60 to 250 million bases.)

[Figure: Carlson_Seq_Performance_Comp_2012a.png, read length vs. daily throughput per instrument]
And now something even more interesting is going on.  Because platforms like PacBio and IonTorrent can upgrade internal components used in the actual sequencing, where those components include hardware, software, and wetware, revisions can result in stunning performance improvements.  Below is a plot with all the same data as above, with the addition of one revision from PacBio.  It's true that the throughput per instrument didn't change so much, but such long read lengths mean you can process less DNA and still rapidly produce high resolution sequence, potentially over megabases (modulo error rates, about which there seems to be some vigorous discussion).  This is not to say that PacBio makes the best overall instrument, nor that the company will be commercially viable, but rather that the competitive environment is producing change at an extraordinary rate.

[Figure: Carlson_Seq_Performance_Comp_2012b.png, read length vs. daily throughput, with a PacBio revision added]
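As an aside on those error rates, here is a simplified sketch of why individually noisy long reads can still yield accurate consensus sequence; the 15% per-base error rate is an assumption for illustration, not a measured PacBio figure, and the model ignores systematic errors.

```python
# Simplified model: a base is called wrong only if more than half of the
# reads covering it are wrong, treating errors as independent. Real
# consensus calling is more subtle; this is only meant to show the trend.
from math import comb

def consensus_error(per_base_err, coverage):
    return sum(
        comb(coverage, k) * per_base_err**k * (1 - per_base_err) ** (coverage - k)
        for k in range(coverage // 2 + 1, coverage + 1)
    )

for cov in (5, 15, 31):
    print(f"{cov}x coverage: consensus error ~{consensus_error(0.15, cov):.1e}")
# The consensus error falls steeply with coverage, so long, individually
# noisy reads can still produce high-resolution sequence over megabases.
```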
If I now take the same plot as above and add a single (putative) MinION nanopore sequencer from Oxford Nanopore (where I have used their performance claims from public presentations; note the question mark on the date), the world again shifts quite dramatically.  Oxford also claims they will ship GridION instruments that essentially consist of racks of MinIONs, but I have not even tried to guess at the performance of that beast.  The resulting sequencing power will alter the shape of the commercial sequencing landscape.  Illumina and Life are not sitting still, of course, but have their own next generation instruments in development.  Jens Gundlach's team at the University of Washington has demonstrated a nanopore (PDF) that is argued to be better than the one Oxford uses, and I understand commercialization is proceeding rapidly, though of course Oxford won't be sitting still either.

One take home message from this, which is highlighted by taking the time to plot this data, is that over the next few years sequencing will become highly accurate, fast, and commonplace.  With the caveat that it is difficult to predict the future, continued competition will result in continued price decreases.

A more speculative take home emerges if you consider the implications of the MinION.  That device is described as a disposable USB sequencer.  If it -- or anything else like it -- works as promised, then some centralized sequencing operations might soon reach the end of their lives.  There are, of course, different kinds of sequencing operations.  If I read the tea leaves correctly, Illumina just reported that its clinical sequencing operations brought in about as much revenue as their other operations combined, including instrument sales.  That's interesting, because it points to two kinds of revenue: sales of boxes and reagents that enable other people to sequence, and certified service operations that provide clinically relevant sequence data.  At the moment, organizations like BGI appear to be generating revenue by sequencing everything under the sun, but cheaper and cheaper boxes might mean that the BGI operations outside of clinical sequencing aren't cost effective going forward.  Once the razors (electric, disposable, whatever) get cheap enough, you no longer bother going to the barber for a shave.

I will continue to work with the data in an effort to make the plots simpler and therefore hopefully more compelling.
----

Here are updated cost and productivity curves for DNA sequencing and synthesis.  Reading and writing DNA is becoming ever cheaper and easier.  The Economist and others call these "Carlson Curves", a name I am ambivalent about but have come to accept if only for the good advertising.  I've been meaning to post updates for a few weeks; the appearance today of an opinion piece at Wired about Moore's Law serves as a catalyst to launch them into the world.  In particular, two points need some attention, the notions that Moore's Law 1) is unplanned and unpredictable, and 2) somehow represents the maximum pace of technological innovation.

DNA Sequencing Productivity is Skyrocketing

First up: the productivity curve.  Readers new to these metrics might want to have a look at my first paper on the subject, "The Pace and Proliferation of Biological Technologies" (PDF) from 2003, which describes why I chose to compare the productivity enabled by commercially available sequencing and synthesis instruments to Moore's Law.  (Briefly, Moore's Law is a proxy for productivity; more transistors putatively means more stuff gets done.)  You have to choose some sort of metric when making comparisons across such widely different technologies, and, however much I hunt around for something better, productivity always emerges at the top.
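For those who want the mechanics of the comparison, here is a minimal sketch of how a doubling time falls out of two productivity data points; the data points themselves are hypothetical stand-ins, not values from the plot.

```python
# Estimate a doubling time from two productivity data points and compare to
# Moore's Law. The data points are hypothetical stand-ins, not plot values.
from math import log

def doubling_time_years(v0, v1, years_elapsed):
    return years_elapsed * log(2) / log(v1 / v0)

# Hypothetical: productivity up 1000-fold over ten years.
dt = doubling_time_years(1.0, 1000.0, 10.0)
print(f"Doubling time: {dt:.2f} years")
# ~1 year per doubling, versus the ~18-24 months usually quoted for
# Moore's Law; this is the sense in which sequencing is "faster".
```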

It's been a few years since I updated this chart.  The primary reason for the delay is that, with the profusion of different sequencing platforms, it became somewhat difficult to compare productivity [bases/person/day] across platforms.  Fortunately, a number of papers have come out recently that either directly make that calculation or provide enough information for me to make an estimate.  (I will publish a full bibliography in a paper later this year.  For now, this blog post serves as the primary citation for the figure below.)

[Figure: carlson_productivity_feb_2013.png, DNA sequencing and synthesis productivity compared with Moore's Law]
Visual inspection reveals a number of interesting things.  First, the DNA synthesis productivity line stops in about 2008 because there have been no new instruments released publicly since then.  New synthesis and assembly technologies are under development by at least two firms, which have announced they will run centralized foundries and not sell instruments.  More on this later.

Second, it is clear that DNA sequencing platforms are improving very rapidly, now much faster than Moore's Law.  This is interesting in itself, but I point it out here because of the post today at Wired by Pixar co-founder Alvy Ray Smith, "How Pixar Used Moore's Law to Predict the Future".  Smith suggests that "Moore's Law reflects the top rate at which humans can innovate. If we could proceed faster, we would," and that "Hardly anyone can see across even the next crank of the Moore's Law clock."

Moore's Law is a Business Model and is All About Planning -- Theirs and Yours

As I have written previously, early on at Intel it was recognized that Moore's Law is a business model (see the Pace and Proliferation paper, my book, and in a previous post, "The Origin of Moore's Law").  Moore's Law was always about economics and planning in a multi-billion dollar industry.  When I started writing about all this in 2000, a new chip fab cost about $1 billion.  Now, according to The Economist, Intel estimates a new chip fab costs about $10 billion.  (There is probably another Law to be named here, something about exponential increases in cost of semiconductor processing as an inverse function of feature size.  Update: This turns out to be Rock's Law.)  Nobody spends $10 billion without a great deal of planning, and in particular nobody borrows that much from banks or other financial institutions without demonstrating a long-term plan to pay off the loan.  Moreover, Intel has had to coordinate the manufacturing and delivery of very expensive, very complex semiconductor processing instruments made by other companies.  Thus Intel's planning cycle explicitly extends many years into the future; the company sees not just the next crank of the Moore's Law clock, but several cranks.  New technology has certainly been required to achieve these planning goals, but that is just part of the research, development, and design process for Intel.  What is clear from comments by Carver Mead and others is that even if the path was unclear at times, the industry was confident that they could get to the next crank of the clock.

Moore's Law served a second purpose for Intel, and one that is less well recognized but arguably more important; Moore's Law was a pace selected to enable Intel to win.  That is why Andy Grove ran around Intel pushing for financial scale (see "The Origin of Moore's Law").  I have more historical work to do here, but it is pretty clear that Intel successfully organized an entire industry to move at a pace only it could survive.  And only Intel did survive.  Yes, there are competitors in specialty chips and in memory or GPUs, but as far as high volume, general CPUs go, Intel is the last man standing.  Finally, and alas I don't have a source anywhere for this other than hearsay, Intel could have in fact gone faster than Moore's Law.  Here is the hearsay: Gordon Moore told Danny Hillis who told me that Intel could have gone faster.  (If anybody has a better source for that particular point, give me a yell on Twitter.)  The inescapable conclusion from all this is that the management of Intel made a very careful calculation.  They evaluated product roll-outs to consumers, the rate of new product adoption, the rate of semiconductor processing improvements, and the financial requirements for building the next chip fab line, and then set a pace that nobody else could match but that left Intel plenty of headroom for future products.  It was all about planning.

The reason I bother to point all this out is that Pixar was able to use Moore's Law to "predict the future" precisely because Intel meticulously planned that future.  (Calling Alan Kay: "The best way to predict the future is to invent it.")  Which brings us back to biology.  Whereas Moore's Law is all about Intel and photolithography, the reason that productivity in DNA sequencing is going through the roof is competition among not just companies but among technologies.  And we are only just getting started.  As Smith writes in his Wired piece, Moore's Law tells you that "Everything good about computers gets an order of magnitude better every five years."  Which is great: it enabled other industries and companies to plan in the same way Pixar did.  But Moore's Law doesn't tell you anything about any other technology, because Moore's Law was about building a monopoly atop an extremely narrow technology base.  In contrast, there are many different DNA sequencing technologies emerging because many different entrepreneurs and companies are inventing the future.

The first consequence of all this competition and invention is that it makes my job of predicting the future very difficult.  This emphasizes the difference between Moore's Law and Carlson Curves (it still feels so weird to write my own name like that): whereas Intel and the semiconductor industry were meeting planning goals, I am simply keeping track of data.  There is no real industry-wide planning in DNA synthesis or sequencing, other than a race to get to the "$1000 genome" before the next guy.  (Yes, there is a vague road-mappy thing promoted by the NIH that accompanied some of its grant programs, but there is little if any coordination because there is intense competition.)

Biological Technologies are Hard to Predict in Part Because They Are Cheaper than Chips

Compared to other industries, the barrier to entry in biological technologies is pretty low.  Unlike chip fabs, there is nothing in biology that costs $10 billion commercially, nor even $1 billion.  (I have come to mostly disbelieve pharma industry claims that developing drugs is actually that expensive, but that is another story for another time.)  The Boeing 787 reportedly cost $32 billion to develop as of 2011, and that is on top of a century of multi-billion dollar aviation projects that had to come before the 787.

There are two kinds of costs that are important to distinguish here.  The first is the cost of developing and commercializing a particular product.  Based on the money reportedly raised and spent by Life, Illumina, Ion Torrent (before acquisition), Pacific Biosciences, Complete Genomics (before acquisition), and others, it looks like developing and marketing second-generation sequencing technology can cost upwards of about $100 million.  Even more money gets spent, and lost, in operations before anybody is in the black.  My intuition says that the development costs are probably falling as sequencing starts to rely more on other technology bases, for example semiconductor processing and sensor technology, but I don't know of any real data.  I would also guess that nanopore sequencing, should it actually become a commercial product this year, will have cost less to develop than other technologies, but, again, that is my intuition based on my time in clean rooms and at the wet bench.  I don't think there is great information yet here, so I will suspend discussion for the time being.

The second kind of cost to keep in mind is the cost of using new technologies to get something done.  Which brings us to the cost curve.  Again, the forthcoming paper will contain appropriate references.
carlson_cost per_base_oct_2012.png
The cost per base of DNA sequencing has clearly plummeted lately.  I don't think there is much to be made of the apparent slow-down in the last couple of years.  The NIH version of this plot has more fine-grained data, and it also directly compares the cost of sequencing with the cost per megabyte of memory, another form of Moore's Law.  Both my productivity plot above and the NIH plot show that sequencing has at times improved much faster than Moore's Law, and generally no slower.

If you ponder the various wiggles, it may be that the fall in sequencing cost is returning to a slower pace after a period in which new technologies dramatically changed the market.  Time will tell.  (The wiggles certainly make prediction difficult.)  One feature of the rapid fall in sequencing costs is that it makes the slow-down in synthesis look smaller; see this earlier post for plots at different scales and a discussion of the evaporating maximum profit margin for long, double-stranded synthetic DNA (the difference between the orange and yellow lines above).

Whereas competition among companies and technologies is driving down sequencing costs, the lack of competition among synthesis companies has contributed to a stagnation in price decreases.  I've covered this in previous posts (and in this Nature Biotech article), but it boils down to the fact that synthetic DNA has become a commodity produced using relatively old technology.

Where Are We Headed?

Now, after concluding that the structure of the industry makes it hard to prognosticate, I must of course prognosticate.  In DNA sequencing, all hell is breaking loose, and that is great for the user.  Whether instrument developers thrive is another matter entirely.  As usual with start-ups and disruptive technologies, surviving first contact with the market is all about execution.  I'll have an additional post soon on how DNA sequencing performance has changed over the years, and what the launch of nanopore sequencing might mean.

DNA synthesis may also see some change soon.  The industry as it exists today is based on chemistry that is several decades old.  The common implementation of that chemistry has heretofore set a floor on the cost of short and long synthetic DNA, and in particular the cost of synthetic genes.  However, at least two companies claim to have technology that busts through that cost floor by enabling the use of smaller amounts of poorer-quality, and thus less expensive, synthetic DNA to build synthetic genes and chromosomes.
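
To make the cost-floor argument concrete, here is a toy model of assembled-gene cost; every number in it is my own illustrative assumption, not a figure from any company.  The point it captures is that cheap, error-prone oligos only bust the floor if bad assemblies can be found and discarded cheaply; otherwise the screening step becomes the new floor.

    # Toy model of the cost per base of an assembled synthetic gene.
    # Every number below is an illustrative assumption, not company data.

    def assembled_cost_per_base(oligo_price, coverage, pass_rate,
                                qc_cost_per_gene, gene_length):
        """Raw oligo cost is inflated by overlap coverage and by the
        fraction of assemblies that fail QC and must be remade."""
        raw_oligo_cost = oligo_price * coverage * gene_length
        expected_attempts = 1.0 / pass_rate
        return (raw_oligo_cost + qc_cost_per_gene) * expected_attempts / gene_length

    # Column-synthesized oligos: high quality, high price.
    print(assembled_cost_per_base(0.10, 2.0, 0.9, 50.0, 1000))   # ~$0.28/base

    # Chip-synthesized oligos: far cheaper but lower quality.  Even with a
    # poor pass rate the raw oligo cost nearly vanishes; QC then dominates,
    # which is exactly the step the new technologies claim to attack.
    print(assembled_cost_per_base(0.001, 2.0, 0.3, 50.0, 1000))  # ~$0.17/base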

Gen9 is already on the market with synthetic genes selling for something like $0.07 per base.  I am not aware of published cost estimates for production, other than the CEO claiming costs will soon drop by orders of magnitude.  Cambrian Genomics has a related technology, and its CEO suggests costs will immediately fall by 5 orders of magnitude.  Of course, neither company is likely to drop prices that far at the outset; instead they will set prices to undercut existing companies and grab market share.  Assuming Gen9 and Cambrian don't collude on pricing, and assuming the technologies work as expected, competition should lead to substantially lower prices on genes and chromosomes within the year.  We will have to see how things actually work in the market.  Finally, Synthetic Genomics has announced it will collaborate with IDT to sell synthetic genes, but as far as I am aware nothing new is actually shipping yet, nor have they announced pricing.
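
For a sense of scale on those claims, consider a 1,500 bp gene (the length is an arbitrary example of mine; the prices are from above):

    # What the claimed price drops would mean for a 1,500 bp synthetic gene.
    gene_length = 1500
    price_per_base = 0.07                       # roughly Gen9's quoted price
    print(gene_length * price_per_base)         # $105 per gene today
    print(gene_length * price_per_base / 1e5)   # ~$0.001 per gene if costs
                                                # truly fell 5 orders of magnitude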

So, supposedly we are soon going to have lots more, lots cheaper DNA.  But you have to ask yourself who is going to use all this DNA, and for what.  The important business point here is that both Gen9 and Cambrian Genomics are working on the hypothesis that demand will increase markedly (by orders of magnitude) as the price falls.  Yet nobody can design a synthetic genetic circuit with more than a handful of components at the moment, which is something of a bottleneck on demand.  Another option is that customers will do less up-front predictive design and instead do more screening of variants.  This is how Amyris works -- despite their other difficulties, Amyris does have a truly impressive metabolic screening operation -- and there are several start-ups planning to provide similar (or even improved) high-throughput screening services for libraries of metabolic pathways.  I infer this is the strategy at Synthetic Genomics as well.  This all may work out well for both customers and DNA synthesis providers.  Again, I think people are working on an implicit hypothesis of radically increased demand, and it would be better to make that hypothesis explicit, in part to identify the risk of getting it wrong.  As Naveen Jain says, successful entrepreneurs are good at eliminating risk, and I worry a bit that the new DNA synthesis companies are not paying enough attention to this point.

There are relatively simple scaling calculations that will determine the health of the industry.  Intel knew that it could grow financially in the context of exponentially falling transistor costs by shipping exponentially more transistors every quarter -- that is the business model of Moore's Law.  Customers and developers could plan product capabilities, just as Pixar did, knowing that Moore's Law was likely to hold for years to come.  But that was in the context of an effective pricing monopoly.  The question for synthetic gene companies is whether the market will grow fast enough to provide adequate revenues when prices fall due to competition.  To keep revenues up, they will then have to ship lots of bases, probably orders of magnitude more bases.  If prices don't fall, then something screwy is happening.  If prices do fall, they are likely to fall quickly as companies battle for market share.  It seems like another inevitable race to the bottom.  Probably good for the consumer; probably bad for the producer.
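
The calculation itself fits in a few lines: revenue is price times volume, so every order of magnitude of price decline must be matched by an order of magnitude of volume growth just to hold revenue flat.  A minimal sketch, with today's volume invented purely for illustration:

    # Revenue = price x volume; volume must grow as fast as price falls.
    price_today = 0.07      # $/base, roughly today's synthetic gene price
    bases_today = 1e8       # illustrative annual volume, an invented number
    revenue_today = price_today * bases_today

    for price_drop in (10, 100, 1000):
        print("price falls %4dx -> ship %4dx more bases to hold revenue at $%.0f"
              % (price_drop, price_drop, revenue_today))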

(Updated)  Ultimately, for the new wave of DNA synthesis companies to be successful, they have to provide the customer something of value.  I suspect there will be plenty of academic customers for cheaper genes.  However, I am not so sure about commercial uptake.  Here's why: DNA is always going to be a small cost of developing a product, and it isn't obvious that making that small cost even smaller helps your average corporate lab.

In general, the R part of R&D only accounts for 1-10% of the cost of the final product.  The vast majority of development costs are in polishing up the product into something customers will actually buy.  If those costs are in the neighborhood of $50-100 million, then reducing the cost of synthetic DNA from $50,000 to $500 is nice, but the corporate scientist-customer is more worried about knocking a factor of two, or an order of magnitude, off the $50 million.  This means that in order to make a big impact (and presumably to increase demand adequately) radically cheaper DNA must be coupled to innovations that reduce the rest of the product development costs.  As suggested above, forward design of complex circuits is not going to be adequate innovation any time soon.  The way out here may be high-throughput screening operations that enable testing many variant pathways simultaneously.  But note that this, too, is a hypothesis about how the immediate future of engineering biology will change -- and another unacknowledged one at that.  It might turn out to be wrong.
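
Rough numbers make the point starkly (the development budget is the figure from above; everything else is a round number for illustration):

    # Cheap DNA barely moves the total cost of developing a product.
    dev_cost = 50e6                  # product development budget, from above
    dna_today, dna_cheap = 50e3, 500

    print((dna_today - dna_cheap) / dev_cost)  # ~0.001: a 0.1% reduction
    print((dev_cost / 2) / dev_cost)           # 0.5: halving everything else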

The upshot, just as I wrote in 2003, is that the market dynamics of biological technologies will remain difficult to predict, precisely because of the diversity of technology and the difficulty of the tasks at hand.  We can plan on prices going down; by how much, I wouldn't want to predict.
