Results tagged “Carlson Curves”
How It Works: Directly Reading DNA
The basic idea is not new: as a long string of DNA passes through a small hole, its components -- the bases A, T, G, and C -- plug that hole to varying degrees. As they pass through the hole, in this case an engineered pore protein derived from one found in nature, each base has slightly different interactions with the walls of the pore. As a result, while passing through the pore each base lets a different number of salt ions through, which allows one to distinguish between the bases by measuring changes in electrical current. Because this method is a direct physical interrogation of the chemical structure of each base, it is in principle much, much faster than any of the indirect sequencing technologies that have come before.
There have been a variety of hurdles to clear to get nanopore sequencing working. First, you need a pore small enough to produce measurable changes in current. Next, the speed of the DNA must be carefully controlled so that the signal-to-noise ratio is high enough. The pore must also sit in an insulating membrane of some sort, surrounded by the necessary electrical circuitry; and to become a useful product, the whole thing must be easily assembled in an industrial manner and be mechanically stable through shipping and use.
Oxford Nanopore claims to have solved all those problems. They recently showed off a disposable version of their technology -- called the MinIon -- with 512 pores built into a USB stick. This puts to shame the Lava Amp, my own experiment with building a USB peripheral for molecular biology. Here is one part I find extremely impressive -- so impressive it is almost hard to believe: Oxford claims they have reduced the sample handling to a single (?) pipetting step. Clive Brown, Oxford's CTO, says "Your fluidics is a Gilson." (A "Gilson" would be a brand of pipette.) That would be quite something.
I've spent a good deal of my career trying to develop simple ways of putting biological samples into microfluidic doo-dads of one kind or another. It's never trivial, it's usually a pain in the ass, and sometimes it's a showstopper. Blood, in particular, is very hard to work with. If Oxford has made this part of the operation simple, then they have a winning technology just based on everyday ease of use -- what sometimes goes by the labels of "user experience" or "human factors". Compared to the complexity of many other laboratory protocols, it would be like suddenly switching from MS DOS to OS X in one step.
How Well Does it Work?
The challenge for fast sequencing is to combine throughput (bases per hour) with read length (the number of contiguous bases read in one go). Existing instruments have throughputs in the range of 10-55,000 megabases/day and read lengths from tens of bases to about 800 bases. (See chart below.) Nick Loman reports that using the MinIon Oxford has already run DNA of 5,000 to 100,000 bases (5 kB to 100 kB) at speeds of 120-1000 bases per second per pore, though accuracy suffers above 500 bases per second. So a single USB stick can easily run at 150 megabases (MB) per hour, which basically means you can sequence full-length eukaryotic chromosomes in about an hour. Over the next year or so, Oxford will release the GridIon instrument, which will have 4 and then 16 times as many pores -- presumably meaning it will be up to 16 times as fast. The long read lengths mean that processing the resulting sequence data, which usually takes longer than the actual sequencing itself, will also be much, much faster.
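For the skeptical, the back-of-envelope arithmetic is easy to check. Here is a quick sketch; the pore count and speed range are as reported above, while the assumption that every pore runs simultaneously is mine:

```python
# Back-of-envelope MinIon throughput, assuming all 512 pores run in
# parallel at the low end of the reported per-pore speed range.
pores = 512
bases_per_second = 120          # low end of the reported 120-1000 range
seconds_per_hour = 3600

throughput = pores * bases_per_second * seconds_per_hour
print(f"{throughput / 1e6:.0f} megabases per hour")  # ~221 Mb/hr
# Even with many pores idle at any given moment, 150 Mb/hr looks attainable.
```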
This is so far beyond existing commercial instruments that it sounds like magic. Writing in Forbes, Matthew Herper quotes Jonathan Rothberg, of sequencing competitor Ion Torrent, as saying "With no data release how do you know this is not cold fusion? ... I don't believe it." Oxford CTO Clive Brown responded to Rothberg in the comments to Herper's post in a very reasonable fashion -- have a look.
Of course I want to see data as much as the next fellow, and I will have to hold one of those USB sequencers in my own hands before I truly believe it. Rothberg would probably complain that I have already put Oxford on the "performance tradeoffs" chart before they've shipped any instruments. But given what I know about building instruments, I think immediately putting Oxford in the same bin as cold fusion is unnecessary.
Below is a performance comparison of sequencing instruments originally published by Bio-era in Genome Synthesis and Design Futures in 2007. (Click on it for a bigger version.) I've hacked it up to include the approximate performance range of 2nd generation sequencers from Life, Illumina, etc., as well as for a single MinIon. That's one USB stick, with what we're told is a few minutes' worth of sample prep. How many can you run at once? Notice the scale on the x-axis, and the units on the y-axis. If it works as promised, the MinIon is so vastly better than existing machines that the comparison is hard to make. If I replotted that data with a log axis along the bottom, all the other technologies would be crammed together way off to the left. (The data comes from my 2003 paper, The Pace and Proliferation of Biological Technologies (PDF), and from Service, 2006, The Race for the $1000 Genome.)
The Broader Impact
Later this week I will try to add the new technologies to the productivity curve published in the 2003 paper. Here's what it will show: biological technologies are improving at exceptional paces, leaving Moore's Law behind. This is no surprise, because while biology is getting cheaper and faster, the density of transistors on chips is set by very long term trends in finance and by SEMATECH; designing and fabricating new semiconductors is crazy expensive and requires coordination across an entire industry. (See The Origin of Moore's Law and What it May (Not) Teach Us About Biological Technologies.) In fact, we should expect biology to move much faster than semiconductors.
Here are a few paragraphs from the 2003 paper:
...The long term distribution and development of biological technology is likely to be largely unconstrained by economic considerations. While Moore's Law is a forecast based on understandable large capital costs and projected improvements in existing technologies, which to a great extent determined its remarkably constant behavior, current progress in biology is exemplified by successive shifts to new technologies. These technologies share the common scientific inheritance of molecular biology, but in general their implementations as tools emerge independently and have independent scientific and economic impacts. For example, the advent of gene expression chips spawned a new industrial segment with significant market value. Recombinant DNA, gel and capillary sequencing, and monoclonal antibodies have produced similar results. And while the cost of chip fabs has reached upwards of one billion dollars per facility and is expected to increase [2012 update: it's now north of $6 billion], there is good reason to expect that the cost of biological manufacturing and sequencing will only decrease. [Update 2012: See "New Cost Curves" for DNA synthesis and sequencing.]

Cue nanopore sequencing.
These trends--successive shifts to new technologies and increased capability at decreased cost--are likely to continue. In the fifteen years that commercial sequencers have been available, the technology has progressed ... from labor intensive gel slab based instruments, through highly automated capillary electrophoresis based machines, to the partially enzymatic Pyrosequencing process. These techniques are based on chemical analysis of many copies of a given sequence. New technologies under development are aimed at directly reading one copy at a time by directly measuring physical properties of molecules, with a goal of rapidly reading genomes of individual cells. While physically-based sequencing techniques have historically faced technical difficulties inherent in working with individual molecules, an expanding variety of measurement techniques applied to biological systems will likely yield methods capable of rapid direct sequencing.
A few months ago I tweeted that I had seen single strand DNA sequence data generated using a nanopore -- it wasn't from Oxford. (Drat, can't find the tweet now.) I am certain there are other labs out there making similar progress. On the commercial front, Illumina is an investor in Oxford, and Life has invested in Genia. As best I can tell, once you get past the original pore sequencing IP, which appears to be licensed broadly, there are many measurement approaches, many pores, and many membranes that could be integrated into a device. In other words, money and time will be the primary barriers to entry.
(For the instrumentation geeks out there, because the pore is larger than a single base, the instrument actually measures the current as three bases pass through the pore. Thus you need to be able to distinguish 4^3=64 levels of current, which Oxford claims they can do. The pore set-up I saw in person worked the same way, so I certainly believe this is feasible. Better pores and better electronics might reduce the physical sampling to 1 or 2 bases eventually, which should result in faster instruments.)
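To make the counting concrete, here is a toy sketch of three-base sensing. The current levels are invented and real signals are noisy, but it shows why a three-base window implies 64 distinguishable levels, and how a sliding window maps back onto sequence:

```python
import itertools
import random

# Toy illustration of 3-base sensing: the pore reads three bases at once,
# so the instrument must resolve 4**3 = 64 distinct current levels.
BASES = "ATGC"
triplets = ["".join(t) for t in itertools.product(BASES, repeat=3)]
assert len(triplets) == 64

# Assign each triplet a made-up, perfectly distinguishable current level.
current_of = {kmer: 50.0 + 0.5 * i for i, kmer in enumerate(triplets)}
kmer_of = {v: k for k, v in current_of.items()}   # ideal, noise-free decoder

strand = "".join(random.choice(BASES) for _ in range(12))

# As the strand ratchets through the pore one base at a time, the sensed
# triplet is a sliding window; each step yields one current measurement.
trace = [current_of[strand[i:i + 3]] for i in range(len(strand) - 2)]

# Decode: recover each window, then collapse the overlaps back to sequence.
windows = [kmer_of[level] for level in trace]
decoded = windows[0] + "".join(w[-1] for w in windows[1:])
assert decoded == strand
print(strand, "->", decoded)
```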
It may be that Oxford will have a first mover advantage for nanopore instruments, and it may be that they have amassed sufficient additional IP to make it rough for competitors. But, given the power of the technology, the size of the market, and the number of academic competitors, I can't see that over the long term this remains a one-company game.
Not every sequencing task has the same technical requirements, so instruments like the Ion Torrent won't be put to the curbside. And other technologies will undoubtedly come along that perform better in some crucial way than Oxford's nanopores. We really are just at the beginning of the revolution in biological technologies. Recombinant DNA isn't even 40 years old, and the electronics necessary for nanopore measurements only became inexpensive and commonplace in the last few years. However impressive nanopore sequencing seems today, the greatest change is yet to come.
As with the last time I was invited to be a "guest speaker" (just one of the oddities of horning an Oxford-style debate into an online shoe), I have difficulty coloring between the lines. Here are the first couple of paragraphs of today's contribution:
The development of computing--broadly construed--was indeed the most significant technological advance of the 20th century. New technologies, however, never crop up by themselves, but are instead part of the woven web of human endeavour. There is always more to a given technology than meets the eye.

I go on to observe that computation is already having an effect on food through increased corn yields courtesy of gene sequencing and expression analysis.
We often oversimplify "computing" and think only of software or algorithms used to manipulate information. That information comes in units of bits, and our ability to store and crunch those bits has certainly changed our economies and societies over the past century. But those bits reside on a disk, or in a memory circuit, and the crunching of bits is done by silicon chips. Those disks, circuits and chips had to improve so that computing could advance.
Progress in building computers during the mid-20th century required first an understanding of materials and how they interact; from this knowledge, which initially lived on paper and in the minds of scientists and engineers, were built the first computer chips. As those chips increased in complexity, so did the computational power they conferred on computer designers. That computational power was used to design more powerful chips, creating a feedback loop. By the end of the century, new chips and software packages could only be designed using computers, and their complex behaviour could only be understood with the aid of computers.
The development of computing, therefore, required not just development of software but also of the ability to build the physical infrastructure that runs software and stores information. In other words, our improving ability to control atoms in the service of building computers was crucial to advancing the technology we call "computing". Advances in controlling atoms have naturally been extended to other areas of human enterprise. Computer-aided design and manufacturing have radically changed our ability to transform ideas into objects. Our manufactured world--which includes cars, aircraft, medicines, food, music, phones and even shoes--now arrives at our doorsteps as a consequence of this increase in computational power.
Click through to read the rest.
The nuts and bolts (or bases and methylases?) of the story are this: Gibson et al ordered a whole mess of pieces of relatively short, synthetic DNA from Blue Heron and stitched that DNA together into the full-length genome of Bug B, which they then transplanted into a related microbial species, Bug A. The transplanted genome B was shown to be fully functional and to change the species from old to new, from A to B. Cool.
Yet, my general reaction to this is the same as it was the last time the Venter team claimed they were creating artificial life. (How many times can one make this claim?) The assembly and boot-up are really fantastic technical achievements. (If only we all had the reported $40 million to throw at a project like this.) But creating life, and even the claim of creating a "synthetic cell"? Meh.
(See my earlier posts, "Publication of the Venter Institute's synthetic bacterial chromosome", January 2008, and "Updated Longest Synthetic DNA Plot", December 2007.)
I am going to agree with my friends at The Economist (see main story) that the announcement is "not unexpected", and disagree strongly that "The announcement is momentous." DNA is DNA. We have known that for, oh, a long time now. Synthetic DNA that is biologically indistinguishable from "natural DNA" is, well, biologically indistinguishable from natural DNA. This result is at least thirty years old, when synthetic DNA was first used to cause an organism to do something new. There are plenty of other people saying this in print, so I won't belabor the point; see, for example, the comments in the NYT article.
One less-than-interesting outcome of this paper is that we are once again going to read all about the death of vitalism (see the Nature opinion pieces). Here are the first two paragraphs from Chapter 4 of my book:
"I must tell you that I can prepare urea without requiring a kidney of an animal, either man or dog." With these words, in 1828 Friedrich Wöhler claimed he had irreversibly changed the world. In a letter to his former teacher Joens Jacob Berzelius, Wöhler wrote that he had witnessed "the great tragedy of science, the slaying of a beautiful hypothesis by an ugly fact." The beautiful idea to which he referred was vitalism, the notion that organic matter, exempliﬁed in this case by urea, was animated and created by a vital force and that it could not be synthesized from inorganic components. The ugly fact was a dish of urea crystals on his laboratory bench, produced by heating inorganic salts. Thus, many textbooks announce, was born the ﬁeld of synthetic organic chemistry.Care to guess where the nucleotides came from that went into the Gibson et al synthetic genome? Probably purified and reprocessed from sugarcane. Less probably salmon sperm. In other words, the nucleotides came from living systems, and are thus tainted for those who care about such things. So much for another nail in the vital coffin.
As is often the case, however, events were somewhat more complicated than the textbook story. Wöhler had used salts prepared from tannery wastes, which adherents to vitalism claimed contaminated his reaction with a vital component. Wöhler's achievement took many years to permeate the mind-set of the day, and nearly two decades passed before a student of his, Hermann Kolbe, first used the word "synthesis" in a paper to describe a set of reactions that produced acetic acid from its inorganic elements.
Somewhat more intriguing will be the debate around whether it is the atoms in the genome that are interesting, or instead the information conveyed by the arrangement of those atoms that we should care about. Clearly, if nothing else, this paper demonstrates that the informational code determines species. This isn't really news to anyone who has thought about it (except, perhaps, to IP lawyers -- see my recent post on the breast cancer gene lawsuit), but it might get a broader range of people thinking more about life as information. What, then, does "creating life" mean? Creating information? Creating sequence? And what sort of design tools do we need to truly control these creations? Are we just talking about much better computer simulations, or is there more physics to learn, or is it all just too complicated? Will we be forever chasing away ghosts of vitalism?
That's all I have for deep meaning at the moment. I've only just gotten off one set of airplanes (New York-DC-LA) and have to get on another for Brazil in the morning.
I would, however, point out that the recent paper describes what may be a species-specific processing hack. From the paper:
...Initial attempts to extract the M. mycoides genome from yeast and transplant it into M. capricolum failed. We discovered that the donor and recipient mycoplasmas share a common restriction system. The donor genome was methylated in the native M. mycoides cells and was therefore protected against restriction during the transplantation from a native donor cell. However, the bacterial genomes grown in yeast are unmethylated and so are not protected from the single restriction system of the recipient cell. We were able to overcome this restriction barrier by methylating the donor DNA with purified methylases or crude M. mycoides or M. capricolum extracts, or by simply disrupting the recipient cell's restriction system.

This methylation trick will probably -- probably -- work just fine for other microbes, but I just want to point out that it isn't necessarily generalizable and that the JCVI team didn't demonstrate any such thing. The team got this one bug working, and who knows what surprises wait in store for the next team working on the next bug.
Since Gibson et al have in fact built an impressive bit of DNA, here is an updated "Longest Synthetic DNA Plot" (here is the previous version with refs.); alas, the one I published just a few months ago in Nature Biotech is already obsolete (hmph, they have evidently now stuck it behind a pay wall).
A couple of thoughts: As I noted in DNA Synthesis "Learning Curve": Thoughts on the Future of Building Genes and Organisms (July 2008), it isn't really clear to me that this game can go on for much longer. Once you hit a megabase (1,000,000 bases, or 1 MB) in length, you are basically at a medium-long microbial genome. Another order of magnitude or so gets you to eukaryotic chromosomes, and why would anyone bother building a contiguous chunk of DNA longer than that? Eventually you get into all the same problems that the artificial chromosome community has been dealing with for decades -- namely that chromatin structure is complex and nobody really knows how to build something like it from scratch. There is progress, yes, and as soon as we get a real mammalian artificial chromosome all sorts of interesting therapies should become possible (note to self: dig into the state of the art here -- it has been a few years since I looked into artificial chromosomes). But with the 1 MB milestone I suspect people will begin to look elsewhere, and the typical technology development S-curve will kick in. Maybe the curve has already started to roll over, as I predicted (sketched in) with the Learning Curve.
Finally, I have to point out that the ~1000 genes in the synthetic genome are vastly more than anybody knows how to deal with in a design framework. I doubt very much that the JCVI team, or the team at Synthetic Genomics, will be using this or any other genome in any economically interesting bug any time soon. As I note in Chapter 8 of Biology is Technology, Jay Keasling's lab and the folks at Amyris are playing with only about 15 genes. And getting the isoprenoid pathway working (small by the Gibson et al standard but big by the everyone-else standard) took tens of person-years and about as much investment (roughly $50 million in total by the Gates Foundation and investors) as Venter spent on synthetic DNA alone. And then, is Synthetic Genomics going to start doing metabolic engineering in a microbe that they only just sequenced and about which relatively little is known (at least compared with E. coli, yeast, and other favorite lab animals)? Or are they going to redo this same genome synthesis project in a bug that is better understood and will serve as a platform or chassis? Either way, really? The company has hundreds of millions of dollars in the bank to spend on this sort of thing, but I simply don't understand what the present publication has to do with making any money.
So, in summary: a very cool big chunk of synthetic DNA being used to run a cell. Not artificial life, and neither an artificial cell nor a synthetic cell. Probably not going to show up in a product, or be used to make a product, for many years. If ever. Confusing from the standpoint of project management, profit, and economic viability.
But I rather hope somebody proves me wrong about that and surprises me soon with something large, synthetic, and valuable. That way lies truly world changing biological technologies.
Total synthesis of a gene
H. Gobind Khorana
Science 16 February 1979: Vol. 203, no. 4381, pp. 614-625
A totally synthetic plasmid for general cloning, gene expression and mutagenesis in Escherichia coli
Wlodek Mandecki, Mark A. Hayden, Mary Ann Shallcross and Elizabeth Stotland
Gene, Volume 94, Issue 1, 28 September 1990, Pages 103-107
Single-step assembly of a gene and entire plasmid from large numbers of oligodeoxyribonucleotides
Willem P. C. Stemmer, Andreas Crameri, Kim D. Ha, Thomas M. Brennan and Herbert L. Heyneker
Gene, Volume 164, Issue 1, 16 October 1995, Pages 49-53
Chemical Synthesis of Poliovirus cDNA: Generation of Infectious Virus in the Absence of Natural Template
Jeronimo Cello, Aniko V. Paul, Eckard Wimmer
Science 9 August 2002: Vol. 297, no. 5583, pp. 1016-1018
Accurate multiplex gene synthesis from programmable DNA microchips
Jingdong Tian, Hui Gong, Nijing Sheng, Xiaochuan Zhou, Erdogan Gulari, Xiaolian Gao & George Church
Nature 432, 1050-1054 (23 December 2004)
Total synthesis of long DNA sequences: Synthesis of a contiguous 32-kb polyketide synthase gene cluster
Sarah J. Kodumal, Kedar G. Patel, Ralph Reid, Hugo G. Menzella, Mark Welch, and Daniel V. Santi
PNAS November 2, 2004, vol. 101, no. 44, pp. 15573-15578
Complete Chemical Synthesis, Assembly, and Cloning of a Mycoplasma genitalium Genome
Daniel G. Gibson, Gwynedd A. Benders, Cynthia Andrews-Pfannkoch, Evgeniya A. Denisova, Holly Baden-Tillson, Jayshree Zaveri, Timothy B. Stockwell, Anushka Brownley, David W. Thomas, Mikkel A. Algire, Chuck Merryman, Lei Young, Vladimir N. Noskov, John I. Glass, J. Craig Venter, Clyde A. Hutchison, III, Hamilton O. Smith
Science 29 February 2008: Vol. 319, no. 5867, pp. 1215-1220
What is Moore's Law?
First up is a 2003 article from Ars Technica that does a very nice job of explaining the whys and wherefores: "Understanding Moore's Law". The crispest statement within the original 1965 paper is "The number of transistors per chip that yields the minimum cost per transistor has increased at a rate of roughly a factor of two per year." At its very origins, Moore's Law emerged from a statement about cost, and economics, rather than strictly about technology.
I like this summary from the Ars Technica piece quite a lot:
Ultimately, the number of transistors per chip that makes up the low point of any year's curve is a combination of a few major factors (in order of decreasing impact):
- The maximum number of transistors per square inch (or, alternately put, the size of the smallest transistor that our equipment can etch),
- The size of the wafer,
- The average number of defects per square inch,
- The costs associated with producing multiple components (i.e. packaging costs, the costs of integrating multiple components onto a PCB, etc.)

In other words, it's complicated. Notably, the article does not touch on any market-associated factors, such as demand and the financing of new fabs.
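To see how those factors combine to produce a "low point", here is a toy cost-per-transistor model. Every number in it is invented for illustration; the point is only that yield losses and per-chip overheads pull in opposite directions, giving the cost curve an interior minimum -- the minimum Moore was tracking:

```python
import numpy as np

# Toy model: cost per transistor vs. die size. All numbers are invented.
wafer_cost = 5000.0        # $ per processed wafer (assumed)
wafer_area = 7000.0        # mm^2, roughly a 100 mm wafer (assumed)
defect_density = 0.005     # defects per mm^2 (assumed)
density = 1000.0           # transistors per mm^2 (assumed)
package_cost = 1.0         # $ per packaged chip (assumed)

die_area = np.linspace(2, 200, 400)                # candidate die sizes, mm^2
yield_frac = np.exp(-defect_density * die_area)    # simple Poisson yield model
dies_per_wafer = wafer_area / die_area             # ignoring edge losses
cost_per_chip = wafer_cost / (dies_per_wafer * yield_frac) + package_cost
cost_per_transistor = cost_per_chip / (density * die_area)

i = np.argmin(cost_per_transistor)
print(f"minimum at ~{die_area[i]:.0f} mm^2: "
      f"{density * die_area[i]:.0f} transistors per chip, "
      f"${cost_per_transistor[i]:.2e} per transistor")
# Bigger dies amortize the packaging cost but lose yield; smaller dies
# waste money on packages. The sweet spot in between is the cost minimum.
```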
The Wikipedia entry on Moore's Law has some good information, but isn't very nuanced.
Next, here is an excerpt from an interview Moore did with Charlie Rose in 2005:
Charlie Rose: ...It is said, and tell me if it's right, that this was part of the assumptions built into the way Intel made its projections. And therefore, because Intel did that, everybody else in Silicon Valley, everybody else in the business, did the same thing. So it achieved a power that was pervasive.

Gordon Moore: That's true. It happened fairly gradually. It was generally recognized that these things were growing exponentially like that. Even the Semiconductor Industry Association put out a roadmap for the technology for the industry that took into account these exponential growths to see what research had to be done to make sure we could stay on that curve. So it's kind of become a self-fulfilling prophecy.

Semiconductor technology has the peculiar characteristic that the next generation always makes things higher performance and cheaper - both. So if you're a generation behind the leading edge technology, you have both a cost disadvantage and a performance disadvantage. So it's a very non-competitive situation. So the companies all recognize they have to stay on this curve or get a little ahead of it.

Keeping up with 'the Law' is as much about the business model of the semiconductor industry as about anything else. Growth for the sake of growth is an axiom of western capitalism, but it is actually a fundamental requirement for chipmakers. Because the cost per transistor is expected to fall exponentially over time, you have to produce exponentially more transistors to maintain your margins and satisfy your investors. Therefore, Intel set growth as a primary goal early on. Everyone else had to follow, or be left by the wayside. The following is from the recent Briefing in The Economist on the semiconductor industry:

...Even the biggest chipmakers must keep expanding. Intel today accounts for 82% of global microprocessor revenue and has annual revenues of $37.6 billion because it understood this long ago. In the early 1980s, when Intel was a $700m company--pretty big for the time--Andy Grove, once Intel's boss, notorious for his paranoia, was not satisfied. "He would run around and tell everybody that we have to get to $1 billion," recalls Andy Bryant, the firm's chief administrative officer. "He knew that you had to have a certain size to stay in business."

Grow, grow, grow

Intel still appears to stick to this mantra, and is using the crisis to outgrow its competitors. In February Paul Otellini, its chief executive, said it would speed up plans to move many of its fabs to a new, 32-nanometre process at a cost of $7 billion over the next two years. This, he said, would preserve about 7,000 high-wage jobs in America. The investment (as well as Nehalem, Intel's new superfast chip for servers, which was released on March 30th) will also make life even harder for AMD, Intel's biggest remaining rival in the market for PC-type processors.

AMD got out of the atoms business earlier this year by selling its fab operations to a sovereign wealth fund run by Abu Dhabi. We shall see how they fare as a bits-only design firm, having sacrificed their ability to themselves push (and rely on) scale.
Where is Moore's Law Taking Us?
Here are a few other tidbits I found interesting:
Re the oft-forecast end of Moore's Law, here is Michael Kanellos at CNET grinning through his prose: "In a bit of magazine performance art, Red Herring ran a cover story on the death of Moore's Law in February--and subsequently went out of business."
And here is somebody's term paper (no disrespect there -- it is actually quite good, and is archived at Microsoft Research) quoting an interview with Carver Mead:
Carver Mead (now Gordon and Betty Moore Professor of Engineering and Applied Science at Caltech) states that Moore's Law "is really about people's belief system, it's not a law of physics, it's about human belief, and when people believe in something, they'll put energy behind it to make it come to pass." Mead offers a retrospective, yet philosophical explanation of how Moore's Law has been reinforced within the semiconductor community through "living it":

After it's [Moore's Law] happened long enough, people begin to talk about it in retrospect, and in retrospect it's really a curve that goes through some points and so it looks like a physical law and people talk about it that way. But actually if you're living it, which I am, then it doesn't feel like a physical law. It's really a thing about human activity, it's about vision, it's about what you're allowed to believe. Because people are really limited by their beliefs, they limit themselves by what they allow themselves to believe what is possible. So here's an example where Gordon [Moore], when he made this observation early on, he really gave us permission to believe that it would keep going. And so some of us went off and did some calculations about it and said, 'Yes, it can keep going'. And that then gave other people permission to believe it could keep going. And [after believing it] for the last two or three generations, 'maybe I can believe it for a couple more, even though I can't see how to get there'. . . The wonderful thing about [Moore's Law] is that it is not a static law, it forces everyone to live in a dynamic, evolving world.

So the actual pace of Moore's Law is about expectations, human behavior, and, not least, economics, but has relatively little to do with the cutting edge of technology or with technological limits. Moore's Law as encapsulated by The Economist is about the scale necessary to stay alive in the semiconductor manufacturing business. To bring this back to biological technologies, what does Moore's Law teach us about playing with DNA and proteins? Peeling back the veneer of technological determinism enables us (forces us?) to examine how we got where we are today.
A Few Meandering Thoughts About Biology
Intel makes chips because customers buy chips. According to The Economist, a new chip fab now costs north of $6 billion. Similarly, companies make stuff out of, and using, biology because people buy that stuff. But nothing in biology, and certainly not a manufacturing plant, costs $6 billion.
Even a blockbuster drug, which could bring revenues in the range of $50-100 billion during its commercial lifetime, costs less than $1 billion to develop. Scale wins in drug manufacturing because drugs require lots of testing, and require verifiable quality control during manufacturing, which costs serious money.
Scale wins in farming because you need...a farm. Okay, that one is pretty obvious. Commodities have low margins, and unless you can hitch your wagon to "eat local" or "organic" labels, you need scale (volume) to compete and survive.
But otherwise, it isn't obvious that there are substantial barriers to participating in the bio-economy. Recalling that this is a hypothesis rather than an assertion, I'll venture back into biofuels to make more progress here.
Scale wins in the oil business because petroleum costs serious money to extract from the ground, because the costs of transporting that oil are reduced by playing a surface-to-volume game, and because thermodynamics dictates that big refineries are more efficient refineries. It's all about "steel in the ground", as the oil executives say -- and in the deserts of the Middle East, and in the Straits of Malacca, etc. But here is something interesting to ponder: oil production may have maxed out at about 90 million barrels a day (see this 2007 article in the FT, "Total chief warns on oil output"). There may be lots of oil in the ground around the world, but our ability to move it to market may be limited. Last year's report from Bio-era, "The Big Squeeze", observed that since about 2006 the petroleum market has in fact relied on biofuels to supply volumes above the ~90 million barrel per day mark. This leads to an important consequence for distributed biofuel production that only recently penetrated my thick skull.
Below the 90 million barrel per day threshold, oil prices fall because supply will generally exceed demand (modulo games played by OPEC, Hugo Chavez, and speculators). In that environment, biofuels have to compete against the scale of the petroleum markets, and margins on biofuels get squeezed as the price of oil falls. However, above the 90 million barrel per day threshold, prices start to rise rapidly (perhaps contributing to the recent spike, in addition to the actions of speculators). In that environment, biofuels are competing not with petroleum, but with other biofuels. What I mean is that large-scale biofuels operations may have an advantage when oil prices are low, because large-scale producers -- particularly those making first-generation biofuels, like corn-based ethanol, that require lots of energy input -- can eke out a bit more margin through surface-to-volume issues and thermodynamics. But as prices rise, both the energy to make those fuels and the energy to move those fuels to market get more expensive. When the price of oil is high, smaller-scale producers -- particularly those with lower capital requirements, as might come with direct production of fuels in microbes -- gain an advantage because they can be more flexible and have lower transportation costs (being closer to the consumer). In this price-volume regime, petroleum production is maxed out and small-scale biofuels producers are competing against other biofuels producers, since they are the only source of additional supply (for materials, as well as fuels).
This is getting a bit far from Moore's Law -- the section heading does contain the phrase "meandering thoughts" -- so I'll try to bring it back. Whatever the origin of the trends, biological technologies appear to be the same sort of exponential driver for the economy as are semiconductors. Chips, software, DNA sequencing and synthesis: all are infrastructure that contribute to increases in productivity and capability further along the value chain in the economy. The cost of production for chips (especially the capital required for a fab) is rising. The cost of production for biology is falling (even if that progress is uneven, as I observed in the post about Codon Devices). It is generally becoming harder to participate in the chip business, and it is generally becoming easier to participate in the biology business. Paraphrasing Carver Mead, Moore's Law became an organizing principle of an industry, and a driver of our economy, through human behavior rather than through technological predestination. Biology, too, will only become a truly powerful and influential technology through human choices to develop and deploy that technology. But access to both design tools and working systems will be much more distributed in biology than in hardware. It is another matter whether we can learn to use synthetic biological systems to improve the human condition to the extent we have through relying on Moore's Law.
While at iGEM this past weekend, I learned that GeneArt is now charging $.55 per base for ~1 kB synthesis jobs, with delivery within 10 days.
Here is an interesting tidbit: They only charged iGEM teams $.20 per base. Anybody have any idea whether this represents their internal cost, and how much margin this might include?
Here is an updated plot for synthesis and sequencing cost. No new data, just a new rendering.
(Update: 12 November, 2008. There is a news piece in last week's Nature that claims Illumina's Genome Analyzer (GA1) was just used to sequence a whole genome in 8 weeks for $250K. However, the paper describing that sequencing effort says:
We generated 135 Gb of sequence (4 billion paired 35-base reads) over a period of 8 weeks (December 2007 to January 2008) on six GA1 instruments averaging 3.3 Gb per production run. The approximate consumables cost (based on full list price of reagents) was $250,000.
Thus the price does not include labor, and is not a true commercial cost (labor is only truly free for professors).
I am therefore not sure if/how this price can be compared to the prices in the figure below.
Update 2: I fixed the significant figure issue with the cost axis. Alas, Open Office does not give great control over the appearance of the digits.)
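For what it's worth, the consumables-only arithmetic is simple enough, and a loudly assumed labor load (my invention, not the paper's) shows how much the comparison could shift:

```python
raw_bases = 135e9          # 135 Gb of sequence reported
consumables = 250_000.0    # reagents at full list price, per the paper

print(f"${consumables / raw_bases * 1e6:.2f} per megabase, consumables only")
# ~$1.85 per megabase of raw sequence

# Assumed labor load (illustrative, not from the paper): two technicians
# for 8 weeks (5 days/week) at a loaded cost of $300 per person-day.
labor = 2 * 8 * 5 * 300.0
print(f"assumed labor adds roughly {labor / consumables:.0%} on top")
```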
With experience comes skill and efficiency. That is the theory behind "learning" or "experience curves", which I played around with last week for DNA sequencing. As promised, here are a few thoughts on the future of DNA synthesis. Playing around with the synthesis curves a bit seems to kick out a couple of quantitative metrics for technological change.
For everything below, clicking on a Figure launches a pop-up with a full sized .jpg. The data come from my papers, the Bio-era "Genome Synthesis and Design Futures" report, and a couple of my blog posts over the last year.
The simplest application of a learning curve to DNA synthesis is to compare productivity with cost. Figure 1 shows those curves for both oligo synthesis and gene synthesis (click on the figure for a larger pop-up). These lines are generated by taking the ratios of fits to data (shown in the inset). This is necessary due to the methodological annoyance that productivity and cost data do not overlap -- the fits allow comparison of trends even when data is missing from one set or another. As before, 1) I am not really thrilled to rely on power law fits to a small number of points, and 2) the projections (dashed lines) are really just for the sake of asking "what if?".
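For anyone who wants to reproduce the trick, the ratio-of-fits procedure looks roughly like this sketch. The data points below are invented placeholders rather than the values behind Figure 1; the point is that fitting each series separately lets you compare trends even where the years don't overlap:

```python
import numpy as np

# Invented placeholder data -- not the values behind Figure 1.
cost_years = np.array([1999.0, 2001, 2004, 2006])
cost = np.array([10.0, 4.0, 1.5, 0.8])            # $ per base

prod_years = np.array([1998.0, 2000, 2003, 2007])
productivity = np.array([1e3, 5e3, 4e4, 1e6])     # bases per person-day

def power_law_fit(t, y, t0=1995.0):
    """Fit y = A * (t - t0)^k, i.e., a straight line in log-log space."""
    k, logA = np.polyfit(np.log10(t - t0), np.log10(y), 1)
    return lambda tt: 10 ** logA * (tt - t0) ** k

cost_fit = power_law_fit(cost_years, cost)
prod_fit = power_law_fit(prod_years, productivity)

# Evaluate both fits over a common span of years and pair them up;
# plotting the pairs on log-log axes gives the learning-curve line.
for t in np.linspace(1999, 2007, 5):
    print(f"{t:.0f}: productivity ~{prod_fit(t):9.0f} bases/day, "
          f"cost ~${cost_fit(t):.2f}/base")
```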
What can we learn from the figure? First, the two lines cover different periods of time. Thus it isn't completely kosher to compare them directly. But with that in mind, we come to the second point: even the simple cost data in the inset makes clear that the commercial cost of synthetic genes is rapidly approaching the cost of the constituent single-stranded oligos. This is the result of competition, and is almost certainly due to new technologies introduced by those competitors.
Assuming that commercial gene foundries are making money, the "Assembly Cost" is probably falling because of increased automation and other gains in efficiency. But it can't fall to zero, and there will (probably?) always be some profit margin for genes over oligos. I am not going to guess at how low the Assembly Cost can fall, and the projections are drawn in by hand just for illustration.
It isn't clear that a couple of straight lines in Figure 1 teach us much about the future, except in pondering the shrinking margins of gene foundries. But combining the productivity information with my "Longest Synthetic DNA" plot gives a little more to chew on. Figure 2 is a ratio of a curve fitted to the longest published synthetic DNA (sDNA) to the productivity curve.
In what follows, remember that the green line is based on data.
First, the caveat: the fit to the longest sDNA is basically a hand hack. On a semilog plot I fit a curve consisting of a logarithm and a power law (not shown). That means the actual functional form (on the original data) is a linear term plus a super power law in which the exponent increases with time. There isn't any rationale for this function other than it fits the crazy data (in the inset), and I would be oh-so-wary of inferring anything deep from it. Perhaps one could make the somewhat trivial observation that for a long time synthesizing DNA was hard (the linear regime), and then we entered a period when it has become progressively easier (the super power law). I should probably win a prize for that. No? A lollipop?
There are a couple of interesting things about this curve, along which distance represents "progress". First, so far as I am aware, commercial oligo synthesis started in 1992 and commercial gene foundries started showing up in 1999. The distance along the curve in those seven years is quite short, while the distance over the next nine years to the Venter Institute's recent synthetic chromosome is substantially larger.
This change in distance/speed represents some sort of quantitative measure of accelerating progress in synthesizing genomes, though frankly I am not yet settled on what the proper metric should be. That is, how exactly should one measure distance or speed along this curve? And then, given proper caution about the utility of the underlying fits to data, how seriously should one trust the metric? Maybe it is just fine as is. I am still pondering this.
Next, while the "learning curve" is presently "concave up", it really ought to turn over and level off sometime soon. As I argued in the post on the Venter Institute's fine technical achievement, they are already well beyond what will be economically interesting for the foreseeable future, which is probably only 10-50 kilobases (kB). It isn't at all clear that assembling sDNA larger than 100 kB will be anything more than an academic demonstration. The red octagon (hint!) is positioned at about 100 MB, which is in the range of a human chromosome. Even assembling something that large, and then using it to fabricate an artificial human chromosome, is probably not technologically that useful. I reserve a bit of judgement here in the event it turns out that actually building functioning human chromosomes from smaller pieces is problematic. But really, why bother otherwise?
Next, with the other curves in hand I couldn't help but compare the longest sDNA to gene assembly cost (beware the products of actual free time!). (Update: Can't recall what I meant by this next sentence, so I struck it out.)
The assembly cost (inset) was generated simply by subtracting the oligo cost curve from the gene cost curve (see Figure 1 above) -- yes, I ignored the fact that those data are over different time periods. There is no cost information available for any of the longest sDNA data, which all come from academic papers. But the fact that gene assembly cost has been consistently halving every 18 months or so just serves to emphasize that the "acceleration" in the ratio of sDNA to assembly cost results from real improvements in processes and automation used to fabricate long sDNA. I don't know that this is that deep an observation, but it does go some way towards providing additional quantitative estimates of progress in developing biological technologies.
I have been wondering what additional information about future technology and markets can be discerned from trends in genome synthesis and sequencing ("Carlson Curves"). To see if there is anything there, I have been playing around with applying the idea of "learning curves" (also called "experience curves") to data on cost and productivity.
Learning curves generally are used to estimate decreases in costs that result from efficiencies that come from increases in production. The more you make of something, the more efficient you become. T.P. Wright famously used this idea in the 1930s to project decreases in cost as a function of increased airplane production. The effect also shows up in a reduction of the cost of photovoltaic power as a function of cumulative production (see this figure, for example).
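In its classic form the relationship is just a power law in cumulative production. Here is a minimal sketch using the traditional 80% learning rate as a stand-in; the actual rates for sequencing or synthesis would have to be fit from the data:

```python
import numpy as np

# Wright-style experience curve: unit cost falls by a fixed fraction each
# time cumulative production doubles. The 80% rate below is the classic
# illustrative value, not a number fit to any sequencing or synthesis data.
first_unit_cost = 100.0
learning_rate = 0.80         # cost multiplier per doubling of production
b = np.log2(learning_rate)   # elasticity, ~ -0.32

cumulative = np.array([1, 2, 4, 8, 16, 32])
unit_cost = first_unit_cost * cumulative ** b   # Wright's law: C(n) = C1 * n^b

for n, c in zip(cumulative, unit_cost):
    print(f"after {n:2d} cumulative units: ${c:5.1f} each")
# Each doubling cuts the unit cost by 20%.
```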
To start with, here are some musings about the future of sequencing and the thousand dollar genome:
Figure 1 was generated from data on sequencing cost and productivity for commercially available instruments (click on the image for a larger pop-up). I am not yet sure how seriously to take the plot, but it is interesting to think about the implications.
A few words on methodology: the data is sparse (see inset), in that there are not many points and data is not readily available in each category for each year. This makes generating the plot of cost vs. productivity subject to estimation and some guesswork. In particular, fitting a power law to the five productivity points, which are spread over only three logs, makes me uneasy. The cost data isn't much better. In the past I have cautioned both the private sector and governments against attempting to use this data to forecast trends. But, really, everyone else is doing it, so why should I let good sense stop me?
Before going on, I should note that sequencing cost and productivity are related but not strictly correlated. They are mostly independent variables at this point in time. Reagents account for a substantial fraction of current sequencing costs, and increasing throughput and automation do not necessarily affect anything other than the number of bases one person can sequence in a day. It is also important to point out that I am plotting productivity rather than cumulative production, and that both productivity and cost improvements include changes to new technology. So the learning curve here is sort of an average over different technologies. It is not a standard way to look at things, but it allows for a few interesting insights.
The blue line was generated by taking a ratio of fits to both the cost and productivity lines. In other words, the blue line is basically data, and it suggests that for every order of magnitude improvement in productivity you get roughly a one and a half order of magnitude reduction in cost. Here is the next point that makes me uneasy: I really have no reason to expect the current trends to maintain their present rates. New sequencing technologies may well cause both productivity and cost changes to accelerate (though I would not expect them to slow -- see, for example, my previous post "The Thousand Dollar Genome").
Forging ahead, extending the trend out to the day when technology provides for the still-mythical Thousand Dollar Genome (TDG) provides an interesting insight. At present rates, the TDG comes when an instrument allows for a productivity of one human genome per person-day. It didn't have to be that way; slightly different doubling times (slopes) in the fits to cost and productivity would have produced a different result. Frankly, I don't know if it means anything at all, but it did make me sit up and look more closely at the plot. You could even call it a weak prediction about technological change -- weak because any deviation from the present average doubling rates would break the prediction.
But even if the present rates remain steady, that doesn't mean the actual cost of sequencing to the end user falls as quickly as it has. Let's say somebody commercially produces an instrument that can actually provide a productivity of one genome per person-day. How many of those instruments might make it onto the market?
Let's estimate that one percent of the US population wants to sign up for sequencing. Those three million people would then require three million person-days' worth of effort to sequence. Operating 24/7 for one year -- three shifts a day -- that would require just over 2700 instruments. It will take some time before that many sequencers are available, which means that even if the technological capability exists there will be some -- probably substantial -- scarcity (the green circle on Figure 1) keeping prices higher for some period. Given that demand will certainly extend into Europe and Asia, further elevating prices, there is no reason to think the TDG will be a practical reality until there exists competition among providers. This competition, in turn, will probably only emerge with the development of a diverse set of technologies capable of hitting the appropriate productivity threshold.
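The instrument arithmetic, spelled out (note the built-in assumption that a person-day is one shift, so a machine staffed around the clock turns out three genomes a day):

```python
us_population = 300_000_000        # circa the mid-2000s
genomes = 0.01 * us_population     # one percent sign up: 3 million

shifts_per_day = 3                 # 24/7 staffing at one genome per person-day
days_per_year = 365

instruments = genomes / (shifts_per_day * days_per_year)
print(f"{instruments:.0f} instruments")   # ~2740 -- 'just over 2700'
```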
What does this imply for the sequencing market, and in particular for health care based on full genome sequencing? First, costs will stay high until there are a large number of instruments in operation, and probably until there are many different technologies available. Thus, if prices are determined solely by the market, the idea of sequencing newborns to give them a head start on maximizing their state of health will probably be out of reach for many years after the initial instrument is developed. Market pricing probably means that sequencing will remain a tool of the wealthy for many, many years to come.
So, what other foolish, over-extended observations can I make based on fitting power laws to sparse data? Just one more for the moment, and it actually doesn't depend so much on the actual data. At a productivity of one genome per person-day, you really have to start thinking about the cost of that person. Somebody will be running the machine, and that person draws a salary. Let's say that this person earns a technician's wage, which amounts with benefits to $300/day. All of a sudden (the trends are power laws, after all) that is 30% of the $1000 spent on sequencing the genome. If the margin is 10-20% of the cost, then the actual sequencing, including financial loads such as depreciation of the instrument and interest, can cost only $500. We are definitely a long time from seeing that price point.
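Spelling out that budget arithmetic, with the $300/day loaded technician cost assumed above:

```python
genome_price = 1000.0    # the mythical TDG
labor = 300.0            # loaded technician cost per person-day (from above)

for margin_rate in (0.10, 0.20):
    remainder = genome_price - labor - margin_rate * genome_price
    print(f"{margin_rate:.0%} margin leaves ${remainder:.0f} "
          f"for reagents, depreciation, and interest")
# A 20% margin leaves $500 -- hence the $500 sequencing budget above.
```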
I'll post on the learning curve for genome synthesis after I make more sense of it.
I have yet to see the print version, but evidently I make an appearance in tomorrow's Economist in a Special Report on Synthetic Biology. (Thanks for the heads-up, Bill.) I wasn't actually interviewed for the piece, but I've no objections to the text. There is an accompanying piece that forecasts the coming "Bedroom Biotech", a phrase they seem to prefer to "Garage Biology". Personally, I prefer to keep my DNA bashing to the garage rather than the bedroom. Well, okay, most but not all of my DNA bashing.
The story contains a figure showing data from 2002 on productivity changes in DNA sequencing and synthesis, redrawn from my 2003 paper, "The Pace and Proliferation of Biological Technologies", labeling them "Carlson Curves" once again. Oh well. The original paper was published in the journal Biosecurity and Bioterrorism (PDF from TMSI, html version at Kurzweilai.net). It isn't so much that I disavow the name "Carlson Curve" as I want to assert that quantitatively predicting the course of biological technologies is a questionable thing to do. As Moore made clear in his paper, what became his law is driven by the financing of expensive chip fabs -- banks require a certain payment schedule before they will loan another billion dollars for a new fab -- whereas biology is cheap and progress is much more likely to be governed by basic science and the total number of people participating in the endeavor.
Newer versions of figures from the 2003 paper, as well as additional metrics of progress in biological technologies, will be available in December with the release of "Genome Synthesis & Design Futures: Implications for the US Economy", written with my colleagues at Bio Economic Research Associates (bio-era), and funded by bio-era and the Department of Energy.
To close the circle, I should explain that the "Carlson Curves" were an attempt to figure out how fast biology is changing, an effort prompted by an essay I wrote for the inaugural Shell/Economist Writing Prize, "The World in 2050." (Here is a PDF of the original essay, which was published in 2001 as "Open Source Biology and its Impact on Industry.") I received a silver prize, rather than gold, and was always slightly miffed that The Economist only published the first place essay, but I suppose I can't complain about the outcome.
(UPDATE, 1 September 06: Here is a note about the recent Synthetic Biology story in The Economist.)
(UPDATE, 20 Feb 06: If you came here from Paul Boutin's story "Biowar for Dummies", I've noted a few corrections HERE.)
Oliver Morton's Wired Magazine article about Synthetic Biology is here. If you are looking for the "Carlson Curves", "The Pace and Proliferation of Biological Technologies" is published in the journal Biosecurity and Bioterrorism. The paper is available in html at kurzweilai.net.
A note on the so-called "Carlson Curves" (Oliver Morton's phrase, not mine): The plots were meant to provide a sense of how changes in technology are bringing about improvements in productivity in the lab, rather than to provide a quantitative prediction of the future. I am not suggesting there will be a "Moore's Law" for biological technologies. Although it may be possible to extract doubling rates for some aspects of this technology, I don't know whether this analysis is very interesting. I prefer to keep it simple. As I explain in the paper, the time scale of changes in transistor density is set by planning and finance considerations for multi-billion dollar integrated circuit fabs. That doubling time has a significant influence on many billions of dollars of investment. Biology, on the other hand, is cheap, and change should come much faster. Money should be less and less of an issue as time goes on, and my guess is those curves provide a lower bound on changes in productivity.
I will try to have something tomorrow about George Church and Co's "unexpected improvement" in DNA synthesis capacity, as well as some comments about Nicholas Wade's New York Times story.