2014 U.S. Brewery Count Chart

As usual, note that the x-axis is decadal until 2010, and that I have tacked individual years on to the right side of the chart.

Here is the PDF for Microbrewing the Bioeconomy (2011).

[Figure: 2014 U.S. brewery count chart (2014 US Brewery Count w banner.png)]

Biosecurity is Everyone's Business (Part 2)

(Here is Part 1.)
 

Part 2. From natural security to neural security


Humans are fragile. For most of history we have lived with the expectation that we will lose the use of organs, and some of us limbs, as we age or suffer injury. But that is now changing. Prostheses are becoming more lifelike and more useful, and replacement organs have been used to save lives and restore function. But how robust are the replacement parts? The imminent prospect of technological restoration of human organs and limbs lost to injury or disease is cause to think carefully about increasing both our biological capabilities and our technological fragilities.


Technology fails us for many reasons. A particular object or application may be poorly designed or poorly constructed. Constituent materials may be faulty, or maintenance may be shoddy. Failure can result from inherent security flaws, which can be exploited directly by those with sufficient technical knowledge and skill. Failure can also be driven by clever and conniving exploits of the overall system that focus on its weakest link, almost always the human user, by inducing them to make a mistake or divulge critical information. Our centuries of experience and documentation of such failures should inform our thinking about the security of emerging technologies, particularly as we begin to fuse biology with electronic systems. The growing scope of biotechnology will therefore require constant reassessment of what vulnerabilities we are introducing through that expansion. Examining the course of other technologies provides some insight into the future of biology.


We carry powerful computers in our pockets, use the internet to gather information and access our finances, and travel the world in aircraft that are often piloted and landed by computers. We are told we can trust this technology with our financial information, our identities and social networks, and, ultimately, our lives. At the same time, technology is constantly shown to be vulnerable and fragile, failing at a non-trivial rate and resulting in identity theft, financial loss, and sometimes personal injury and death. We embrace technology despite well-understood risks; automobiles, electricity, fossil fuels, automation, and bicycles all kill people every day in predictable numbers. Yet we continue to use technology, integrating it further into multiple arenas in our lives, because we decide that the benefits outweigh the risks.


Healthcare is one arena in which risks are multiplying. The IT security community has for some years been aware of network vulnerabilities in medical devices such as pacemakers and implantable defibrillators. The ongoing integration of networked medical devices in health care settings, an integration that is constantly introducing both new capabilities and new vulnerabilities, is now the focus of extensive efforts to improve security. The impending introduction of networked, semi-autonomous prostheses raises similar, equally obvious concerns. Wi-Fi enabled pacemakers and implantable defibrillators are just the start; soon we will see bionic arms, legs, and eyes with network connections that allow performance monitoring and tuning.


Eventually, prostheses will not simply restore "human normal" capabilities, they will also augment human performance. I learned recently that DARPA explicitly chose to limit the strength of its robotic arm, but that limit can't last: science-fiction-style super robotic strength is coming. What happens when hackers get ahold of this technology? How will people begin to modify themselves and their robotic appendages? And, of course, the flip side of having enhanced physical capabilities is having enhanced vulnerabilities. By definition, tuning can improve or degrade performance, and this raises an important security question: who holds the password for your shiny new arm? Did someone remember to overwrite the factory default password? Is the new password susceptible to a dictionary attack? The future brings even more concerns. Control connections to a prosthesis are bi-directional and, as the technology improves, ever better neural interfaces will eventually jack these prostheses directly into the brain. "Tickling" a robotic limb could take on a whole new meaning, providing a means to connect various kinds of external signals to the brain in new ways.
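Those password questions are concrete enough to sketch in code. Below is a minimal illustration, in Python, of the kind of audit a device vendor or a wary patient might run; the default-password list, the wordlist, and the function are hypothetical stand-ins, not drawn from any real device.

```python
# Hypothetical password audit for a networked prosthesis.
# Nothing here reflects a real device API; it illustrates the checks only.

KNOWN_FACTORY_DEFAULTS = {"admin", "password", "1234", "0000"}

def audit_password(password, wordlist):
    """Return a list of reasons this password is weak, if any."""
    problems = []
    if password in KNOWN_FACTORY_DEFAULTS:
        problems.append("factory default was never changed")
    if password.lower() in wordlist:
        problems.append("susceptible to a simple dictionary attack")
    if len(password) < 12:
        problems.append("too short to resist brute force")
    return problems

# Example with a tiny stand-in wordlist; a real audit would load a large one.
dictionary = {"secret", "letmein", "dragon"}
print(audit_password("letmein", dictionary))
# ['susceptible to a simple dictionary attack', 'too short to resist brute force']
```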


Beyond limbs, we must also consider neural connections that serve to open entirely novel senses. It is not a great leap to envision a wide range of ensuing digital-to-neural input/output devices. These technologies are evolving at a rapid rate, and through them we are on the cusp of opening up human brains to connections with a wide range of electromechanical hardware capabilities and, indeed, all the information on the internet.


Just this week saw publication of work on a cochlear implant that delivers a gene therapy to auditory neurons, promoting the formation of electrical connections with the implant and thereby dramatically improving the hearing response of test animals. We are used to the idea of digital music files being converted by speakers into sound waves, which enter the brain through the ear. But the cochlear implant is basically an Ethernet connection wired to your auditory nerve, which in principle means any signal can be piped into your brain. How long can it be before we see experiments with a cochlear (or other) implant that enables direct conversion of arbitrary digital information into neural signals? At that point, "hearing" might extend into every information format. So, again we must ask: who holds the password to your brain implant?


Hacking the Bionic Man


As this technology is deployed in the population it is clear that there can be no final and fixed security solution. Most phone and computer users are now all too aware that new hardware, firmware, and operating systems always introduce new kinds of risks and threats. The same will be true of prostheses. The constant rat race to chase down security holes in new product upgrades will soon extend directly into human brains. As more people are exposed to medical device vulnerabilities, security awareness and improvement must become an integrated part of medical practice. This discussion can easily be extended to potential vulnerabilities that will arise from the inevitable integration into human bodies of not just electromechanical devices, but of ever more sophisticated biological technologies. The exploration of prosthesis security, loosely defined, gives some indication of the scope of the challenge ahead.


The class of things we call prostheses will soon expand beyond electromechanical devices to encompass biological objects such as 3D printed tissues and lab-grown organs. As these cell-based therapies begin to enter human clinical trials, we must assess the security of both the therapies themselves and the means used to create and administer them. If replacement organs and tissues are generated from cells derived from donors, what vulnerabilities do the donors have? How are those donor vulnerabilities passed along to the recipients? Yes, you have an immune system that does wonders most of the time. But are your natural systems up to the task of handling the biosecurity of augmented organs?


What does security even mean in this context? In addition to standard patient work-ups, should we begin to fully sequence the genomes of donor tissues, first to identify potential known health issues, and then to build a database that can be re-queried as new genetic links to disease are discovered? Are there security holes in the 3D printers and other devices used to manipulate cells and tissues? What are the long-term security implications of deploying novel therapeutic tissues in large numbers of military and civilian personnel? What are the long-term security implications of using both donor and patient tissue as seeds of induced pluripotent stem cells, or of differentiating any stem cell line for use in therapies? Do we fully understand the complement of microbes and genomes that may be present in donor samples, or lying dormant in donor genomes, or that may be introduced via laboratory procedures and instruments used to process cells for use as therapies? What is the genetic security of a modified cell line or induced pluripotent stem cell? If there is a genetic modification embedded in your replacement heart tissue, where did the new DNA come from, and are you sure you know everything that it encodes? As with information technologies, we should expect that these new biological technologies will sometimes arrive with accidental vulnerabilities; they may also come with intentionally introduced back doors. The economic motivation to create new prostheses, as well as to exploit vulnerabilities, will soon introduce market competition as a factor in biosecurity.


Competition often drives perverse strategic decisions when it comes to security. Firms rush to sell hardware and software that are said to be secure, only to discover that constant updates are required to patch security holes. We are surrounded by products in endless beta. Worse yet, manufacturers have been known to sit on security holes in the naive hope that no one else will notice. Vendors sometimes appear no more literate about the security of hardware and software than are their customers. What will the world look like when electromechanical and biological prostheses are similarly in constant states of upgrade? Who will you trust to build/print/grow a prosthesis? Are you going to place your faith in the FDA to police all these risks? (Really?) If you decide instead to place your faith in the market, how will you judge the trustworthiness of firms that sell aftermarket security solutions for your bionic leg or replacement liver?


The complexity of the task at hand is nearly overwhelming. Understanding the coming fusion of technologies will require competency in software, hardware, wetware, and security -- where are those skill sets being developed in a compatible, integrated manner? This just leads to more questions: Are there particular countries that will have a competitive advantage in this area? Are there particular countries that will be hotbeds of prosthesis malware creation and distribution?


The conception of security, whether of individuals or nation states, is going to change dramatically as we become ever more economically dependent upon the market for biological technologies. Given the spreading capability to participate and innovate in technology development, which inevitably amplifies the number and effect of vulnerabilities of all kinds, I suspect we need to re-envision at a very high level how security works.


[Coming soon: Part 3.]

Biosecurity is Everyone's Business (Part 1)


Part 1. The ecosystem is the enterprise


We live in a society increasingly reliant upon the fruits of nature. We consume those fruits directly, and we cultivate them as feedstocks for fuel, industrial materials, and the threads on our backs. As a measure of our dependence, revenues in the bioeconomy are rising rapidly, demonstrating a demand for biological products that is growing much faster than the global economy as a whole.


This demand represents an enormous market pull on technology development, commercialization, and, ultimately, natural resources that serve as feedstocks for biological production. Consequently, we must assess carefully the health and longevity of those resources. Unfortunately, it is becoming ever clearer that the natural systems serving to supply our demand are under severe stress. We have been assaulting nature for centuries, with the heaviest blows delivered most recently. Nature, in the most encompassing sense of the word, has been astonishingly resilient in the face of this assault. But the accumulated damage has cracked multiple holes in ecosystems around the globe. There are very clear economic costs to this damage -- costs that compound over time -- and the cumulative damage now poses a threat to the availability of the water, farmland, and organisms we rely on to feed ourselves and our economy.


I would like to clarify that I am not predicting collapse, nor that we will run out of resources; rather, I expect new technologies to continue increasing productivity and improving the human condition. Successfully developing and deploying those technologies will, obviously, further increase our economic dependency on nature. As part of that growing dependency, businesses that participate in the bioeconomy must understand and ensure the security of feedstocks, transportation links, and end use, often at a global scale. Consequently, it behooves us to thoroughly evaluate any vulnerabilities we are building into the system so that we can begin to prepare for inevitable contingencies.


Revisiting the definition of biosecurity: from national security to natural security, and beyond


Last year John Mecklin at Bulletin of the Atomic Scientists asked me to consider the security implications of the emerging conversation (or, perhaps, collision) between synthetic biology and conservation biology. This conversation started at a meeting last April at the University of Cambridge, and is summarized in a recent article in Oryx. What I came up with for BAS was an essay that cast very broadly the need to understand threats to all of the natural systems we depend on. Quantifying the economic benefit of those systems, and the risk inherent in our dependence upon them, led me directly to the concept of natural security.


Here I want to take a stab at expanding the conversation further. Rapidly rising revenues in the bioeconomy, and the rapidly expanding scope of application, must critically inform an evolving definition of biosecurity. In other words, because economic demand is driving technology proliferation, we must continually refine our understanding of what it is that we must secure and from where threats may arise.


Biosecurity has typically been interpreted as the physical security of individuals, institutions, and the food supply in the context of threats such as toxins and pathogens. These will, of course, continue to be important concerns: new influenza strains constantly emerge to threaten human and animal health; the (re?)emergent PED virus has killed an astonishing 10% of U.S. pigs this year alone; within the last few weeks there has been an alarming uptick in the number of human cases and deaths caused by MERS. Beyond these natural threats are pathogens created by state and non-state organizations, sometimes in the name of science and outbreak preparedness, and sometimes escaping containment to cause harm. Yet, however important these events are, they are but pieces of a biosecurity puzzle that is becoming ever more complex.


Due to the large and growing contribution of the bioeconomy, no longer are governments concerned merely with the proverbial white powder produced in a state-sponsored lab, or even in a 'cave' in Afghanistan. Because economic security is now generally included in the definition of national security, the security of crops, drug production facilities, and industrial biotech will constitute an ever more important concern. Moreover, in the U.S., as made clear by the National Strategy for Countering Biological Threats (PDF), the government has established that encouraging the development and use of biological technologies in unconventional environments (i.e., "garages and basements") is central to national security. Consequently, the concept of biosecurity must comprise the entire value chain from academics and garage innovators, through production and use, to, more traditionally, the health of crops, farm animals, and humans. We must endeavor to understand, and to buttress, fragility at every link in this chain.


Beyond the security of specific links in the bioeconomy value chain we must examine the explicit and implicit connections between them, because through our behavior we connect them. We transport organisms around the world; we actively breed plants, animals, and microbes; we create new objects with flaws; we emit waste into the world. It's really not that complicated. However, we often choose to ignore these connections because acknowledging them would require us to respect them, and consequently to behave differently. But that change in behavior must be the future of biosecurity. 


From an enterprise perspective, as we rely ever more heavily on biology in our economy, so must we comprehensively define 'biosecurity' to adequately encompass relevant systems. Vulnerabilities in those systems may be introduced intentionally or accidentally. An accidental vulnerability may lie undiscovered for years, as in the case of the recently disclosed Heartbleed hole in the OpenSSL internet security protocol, until it is identified, at which point it becomes a threat. The risk, even in open source software, is that the vulnerability may be identified by organizations that then exploit it before it becomes widely known. This is reported to be true of the NSA's understanding and exploitation of Heartbleed at least two years in advance of its recent public announcement. Our biosecurity challenge is to carefully, and constantly, assess how the world is changing and address shortcomings as we find them. It will be a transition every bit as painful as the one we are now experiencing for hardware and software security.


(Here is Part 2.)

Scientists and engineers around the globe dream of employing biology to create new objects. The goal might be building replacement organs, electronic circuits, living houses, or cowborgs and carborgs (my favorites) that are composed of both standard electromechanical components and novel biological components. Whatever the dream, and however outlandish, we are getting closer every day.

Looking a bit further down the road, I would expect organs and tissues that have never before existed. For example, we might be able to manufacture hybrid internal organs for the cowborg that process rough biomass into renewable fuels and chemicals. Both the manufacturing process and the cowborg itself might utilize novel genetic pathways generated in DARPA's Living Foundries program. The first time I came across ideas like the cowborg was in David Brin's short story "Piecework". I've pondered this version of distributed biological manufacturing for years, pursuing the idea into microbrewing, and then to the cowborg, the economics of which I am now exploring with Steve Aldrich from bio-era.

Yet as attractive and powerful as biology is as a means for manufacturing, I am not sure it is powerful enough. Other ways that humans build things, and that we build things that build things, are likely to be part of our toolbox well into the future. Corrosion-resistant plumbing and pumps, for example, constitute very useful kit for moving around difficult fluids, and I wouldn't expect Teflon to be produced biologically anytime soon. Photolithography, electrodeposition, and robotics, now emerging in the form of 3D printing, enable precise control over the position of matter, though frequently using materials and processes inimical to biology. Humans are really good at electrical and mechanical engineering, and we should build on that expertise with biological components.

Let's start with the now hypothetical cowborg. The mechanical part of a cowborg could be robotic, and could look like Big Dog, or perhaps simply a standard GPS-guided harvester, which comes standard with air conditioning and a DVD player to keep the back-up human navigation system awake. This platform would be supplemented by biological components, initially tanks of microbes, that turn raw feedstocks into complex materials and energy. Eventually, those tanks might be replaced by digestive organs and udders that produce gasoline instead of milk, where the artificial udders are enabled by advances in genetics, microbiology, and bioprinting. Realizing this vision could make biological technologies part of literally anything under the sun. In a simple but effective application along these lines, the ESA is already using "burnt bone charcoal" as a protective coating on a new solar satellite.

But there is one persistent problem with this vision: unless it is dead and processed, as in the case of the charcoal spacecraft coating, biology tends not to stay where you put it. Sometimes this will not matter, such as with many replacement transplant organs that are obviously supposed to be malleable, or with similar tissues made for drug testing. (See the recent Economist article, "Printing a bit of me", this CBS piece on Alexander Seifalian's work at University College London, and this week's remarkable news out of Anthony Atala's lab.) Otherwise, cells are usually squishy, and they tend to move around, which complicates their use in fabricating small structures that require precise positioning. So how do you use biology to build structures at the micro-scale? More specifically, how do you get biology to build the structures you want, as opposed to the structures biology usually builds?

We are getting better at directing organisms to make certain compounds via synthetic biology, and our temporal control of those processes is improving. We are inspired by the beautiful fabrication mechanisms that evolution has produced. Yet we still struggle to harness biology to build stuff. Will biological manufacturing ever be as useful as standard machining is, or as flexible as 3D printing appears it will be? I think the answer is that we will use biology where it makes sense, and we will use other methods where they make sense, and that in combination we will get the best of both worlds. What will it mean when we can program complex matter in space and time using a fusion of electromechanical control (machining and printing) and biochemical control (chemistry and genetics)? There are several recent developments that point the way and demonstrate hybrid approaches that employ the 3D printing of biological inks that subsequently display growth and differentiation.

Above is a slide I used at the recent SynBERC retreat in Berkeley. On the upper left, Organovo is now shipping lab-produced liver tissue for drug testing. This tissue is not yet ready for use in transplants, but it does display all the structural and biochemical complexity of adult livers. A bit further along in development are tracheas from Harvard Biosciences, which are grown from patient stem cells on 3D-printed scaffolds (Claudia Castillo was the first recipient of a transplant like this in 2007, though her cells were grown on a cadaver windpipe first stripped of the donor's cells). And then we have the paper on the right, which really caught my eye. In that publication, viruses on a 3D woven substrate were used to reprogram human cells that were subsequently cultured on that substrate. The green cells above, which may not look like much, are the result of combining 3D fabrication of non-living materials with a biological ink (the virus), which in combination serve to physically and genetically program the differentiation and growth of mammalian cells, in this case into cartilage. That's pretty damn cool.

Dreams of building with biology

Years ago, during the 2003 "DARPA/ISAT Synthetic Biology Study", we spent considerable time discussing whether biology could be used to rationally build structures like integrated circuits. The idea isn't new: is there a way to get cells to build structures at the micro- or nano-scale that could help replace photolithography and other 2D patterning techniques used to build chips? How can humans make use of cells -- lovely, self-reproducing factories -- to construct objects at the nanometer scale of molecules like proteins, DNA, and microtubules?

Cells, of course, have dimensions in the micron range, and commercial photolithography was, even in 2003, operating at about the 25 nanometer range (now at about 15 nm). The challenge is to program cells to lay down structures much smaller than they are. Biology clearly knows how to do this already. Cells constantly manufacture and use complex molecules and assemblies that range from 1 to 100 nm. Many cells move or communicate using extensions ("processes") that are only 10-20 nanometers in width but tens of microns in length. Alternatively, we might directly use synthetic DNA to construct a self-assembling scaffold at the nano-scale and then build devices on that scaffold using DNA-binding proteins. DNA origami has come a long way in the last decade and can be used to build structures that span nanometers to microns, and templating circuit elements on DNA is old news. We may even soon have batteries built on scaffolds formed by modified, self-assembling viruses. But putting all this together in a biological package that enables nanometer-scale control of fabrication across tens of centimeters, and doing it as well as lithography, and as reproducibly as lithography, has thus far proved difficult.

Conversely, starting at the macro scale, machining and 3D printing work pretty well from meters down to hundreds of microns. Below that length scale we can employ photolithography and other microfabrication methods, which can be used to produce high volumes of inexpensive objects in parallel, but which also tend to have quite high cost barriers. Transistors are so cheap that they are basically free on a per unit basis, while a new chip fab now costs Intel about $10 billion.
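To see why transistors end up "basically free" despite the price of the fab, a back-of-the-envelope amortization helps. In the sketch below, only the ~$10 billion fab cost comes from the text above; the throughput, dies per wafer, transistor count, and depreciation window are all assumed placeholders.

```python
# Back-of-the-envelope: amortize fab capital cost over transistors produced.
FAB_COST = 10e9             # $, from the text above
WAFERS_PER_MONTH = 50_000   # assumed fab throughput
CHIPS_PER_WAFER = 500       # assumed dies per 300 mm wafer
TRANSISTORS_PER_CHIP = 1e9  # assumed device count per chip
YEARS = 4                   # assumed depreciation window

transistors = (WAFERS_PER_MONTH * 12 * YEARS
               * CHIPS_PER_WAFER * TRANSISTORS_PER_CHIP)
print(f"capital cost per transistor: ${FAB_COST / transistors:.1e}")
# ~$8e-09 each -- effectively free on a per-unit basis
```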

My experiences working on different aspects of these problems suggest to me that, eventually, we will learn to exploit the strengths of each of the relevant technologies, just as we learn to deal with their weaknesses; through the combination of these technologies we will build objects and systems that we can only dream of today.

Squishy construction

Staring through a microscope at fly brains for hours on end provides useful insights into the difference between anatomy and physiology, between construction and function. In my case, those hours were spent learning to find a particular neuron (known as H1) that is the output of the blowfly motion measurement and computation system. The absolute location of H1 varies from fly to fly, but eventually I learned to find H1 relative to other anatomical landmarks and to place my electrode within recording range (a few tens of microns) on the first or second try. It has long been known that the topological architecture (the connectivity, or wiring diagram) of fly brains is identical between individuals of a given species, even as the physical architecture (the locations of neurons) varies greatly. This is the difference between physiology and anatomy.

The electrical and computational output of H1 is extremely consistent between individuals, which is what makes flies such great experimental systems for neurobiology. This is, of course, because evolution has optimized the way these brains work -- their computational performance -- without the constraint that all the bits and pieces must be in exactly the same place in every brain. Fly brains are constructed of squishy matter, but the computational architecture is quite robust. Over the last twenty years, humans have learned to grow various kinds of neurons in dishes, and to coax them into connecting in interesting ways, but it is usually very hard to get those cells to position themselves physically exactly where you want them, with the sort of precision we regularly achieve with other forms of matter.

Crystalline construction

The first semiconductor processing instrument I laid hands on in 1995 was a stepper. This critical bit of kit projects UV light through a mask, which contains the image of a structure or circuit, onto the surface of a photoresist-covered silicon wafer. The UV light alters the chemical structure of the photoresist, which after further processing eventually enables the underlying silicon to be chemically etched in a pattern identical to the mask. Metal or other chemicals can be similarly patterned. After each exposure, the stepper automatically shifts the wafer over, thereby creating an array of structures or circuits on each wafer. This process enables many copies of a chip to be packed onto a single silicon wafer and processed in parallel. The instruments on which I learned to process silicon could handle ~10 cm diameter wafers. Now the standard is about 30 cm, because putting more chips on a wafer reduces marginal processing costs. But it isn't cheap to assemble the infrastructure to make all this work. The particular stepper I used (this very instrument, as a matter of fact), which had been donated to the Nanofabrication Facility at Cornell and which was ancient by the time I got to it, contained a quartz lens I was told cost about $1 million all by itself. The kit used in a modern chip fab is far more expensive, and the chemical processing used to fabricate chips is quite inimical to cells. Post-processing, silicon chips can be treated in ways that encourage cells to grow on them and even to form electrical connections, but the overhead to get to that point is quite high.

Arbitrary construction

The advent of 3D printers enables the reasonably precise positioning of materials just about anywhere. Depending on how much you want to spend, you can print with different inks: plastics, composites, metals, and even cells. This lets you put stuff where you want it. The press is constantly full of interesting new examples of 3D printing, including clothes, bone replacements, objets d'art, gun components, and parts for airplanes. As promising as all this is, the utility of printing is still limited by the step size (the smallest increment of the position of the print head) and the spot size (the smallest amount of stuff the print head can spit out) of the printer itself. Moreover, printed parts are usually static: once you print them, they just sit there. But these limitations are already being overcome by using more complex inks.

Hybrid construction

If the ink used in the printer has the capacity to change after it gets printed, then you have introduced a temporal dimension into your process: now you have 4D printing. Typically, 4D printing refers to objects whose shape or mechanical properties can be dynamically controlled after construction, as with these 3D objects that fold up after being printed as 2D objects. But beyond this, if you combine squishy, crystalline, and arbitrary construction, you get a set of hybrid construction techniques that allows programming matter from the nanoscale to the macroscale in both time and space.

[Figure: slide from a 2010 DARPA Future of Manufacturing study on combining 3D printing and biotech (Carlson_build_with_bio2.png)]

Above is a slide from a 2010 DARPA study on the Future of Manufacturing, from a talk in which I tried to articulate the utility of mashing up 3D printing and biotech. We have already seen the first 3D printed organs, as described earlier. Constructed using inks that contain cells, even the initial examples are physiologically similar to natural organs. Beyond tracheas, printed or lab-grown organs aren't yet ready for general use as transplants, but they are already being used to screen drugs and other chemicals for their utility and toxicity. Inks could also consist of: small molecules (i.e. chemicals) that react with each other or the environment after printing; DNA and proteins that serve structural, functional (say, electronic), or even genetic roles after printing; viruses that form structures or that are intended to interact biologically with later layers; cells that interact with each other or follow some developmental program defined genetically or by the substrate, as demonstrated in principle by the cartilage paper above.

The ability to program the three-dimensional growth and development of complex structures will have transformative impacts throughout our manufacturing processes, and therefore throughout our economy. The obvious immediate applications include patient-specific organs and materials such as leather, bone, chitin, or even keratin (think vat-grown ivory), that are used in contexts very different than we are used to today.

[Figure: slide on programmable printer inks (Carlson_build_with_bio4.png)]
It is hard to predict where this is going, of course, but any function we now understand for molecules or cells can be included in programmable inks. Simple two-part chemical reactions will be common in early inks, transitioning over time to more complex inks containing multiple reactants, including enzymes and substrates. Eventually, programmable printer inks will employ the full complement of genes and biochemistry present in viruses, bacteria, and eukaryotic cells. Beyond existing genetics and biochemistry, new enzymes and genetic pathways will provide materials we have never before laid hands on. Within DARPA's Living Foundries program is the 1000 Molecules program, which recently awarded contracts to use biology to generate "chemical building blocks for accessing radical new materials that are impossible to create with traditional petroleum-based feedstocks".

Think about that for a moment: it turns out that of the entire theoretical space of compounds we can imagine, synthetic chemistry can only provide access to a relatively small sample. Biology, on the other hand, in particular novel systems of (potentially novel) enzymes, can be programmed to synthesize a much wider range of compounds. We are just learning how to design and construct these pathways; the world is going to look very different in ten years' time. Consequently, as these technologies come to fruition, we will learn to use new materials to build objects that may be printed at one length scale, say centimeters, and that grow and develop at length scales ranging from nanometers to hundreds of meters.

Just as hybrid construction that combines the features of printers and inks will enable manufacturing on widely ranging length scales, so will it give us access to a wide range of time scales. A 3D printer presently runs on fairly understandable human time scales of seconds to hours. For the time being, we are still learning how to control print heads and robotic arms that position materials, so they move fairly slowly. Over time, the print head will inevitably be able to move on time scales at least as short as milliseconds. Complex inks will then extend the reach of the fabrication process into the nanoseconds on the short end, and into the centuries on the long end.

[Figure: slide on length and time scales accessible to hybrid fabrication (Carlson_build_with_bio3.png)]
I will be the first to admit that I haven't the slightest idea what artifacts made in this way will do or look like. Perhaps we will build/grow trees the size of redwoods that produce fruit containing libations rivaling the best wine and beer. Perhaps we will build/grow animals that languidly swim at the surface of the ocean, extracting raw materials from seawater and photosynthesizing compounds that don't even exist today but that are critical to the future economy.

These examples will certainly prove hopelessly naive. Some problems will turn out to be harder than they appear today, and other problems will turn out to be much easier than they appear today. But the constraints of the past, including the oft-uttered phrase "biology doesn't work that way", do not apply. The future of engineering is not about understanding biology as we find it today, but rather about programming biology as we will build it tomorrow.

What I can say is that we are now making substantive progress in learning to manipulate matter, and indeed to program matter. Science fiction has covered this ground many times, sometimes well, sometimes poorly. But now we are doing it in the real world, and sketches like those on the slides above provide something of a map to figure out where we might be headed and what our technical capabilities will be like many years hence. The details are certainly difficult to discern, but if you step back a bit, and let your eyes defocus, the overall trajectory becomes clear.

This is a path that John von Neumann and Norbert Wiener set out on many decades ago. Physics and mathematics taught us what the rough possibilities should be. Chemistry and materials science have demonstrated many detailed examples of specific arrangements of atoms that behave physically in specific ways. Control theory has taught us both how organisms behave over time and how to build robots that behave in similar ways. Now we are learning to program biology at the molecular level. The space of the possible, of the achievable, is expanding on a daily basis. It is going to be an interesting ride.

The most important paragraph of The Gene Factory

The most important paragraph of Michael Specter's story about BGI:

"In the United States and in the West, you have a certain way," [BGI President Jian Wang] continued, smiling and waving his arms merrily. "You feel you are advanced and you are the best. Blah, blah, blah. You follow all these rules and have all these protocols and laws and regulations. You need somebody to change it. To blow it up. For the last five hundred years, you have been leading the way with innovation. We are no longer interested in following."
[Given the mix-up in the publication date of 2015, I have now deleted the original post. I have appended the comments from the original post to the bottom of this post.]

It's time once again to see how quickly the world of biological technologies is changing. The story is mixed, in part because it is getting harder to find useful data, and in part because it is getting harder to generate appropriate metrics. 

Sequencing and synthesis productivity

I'll start with the productivity plot, as this one isn't new. For a discussion of the substantial performance increase in sequencing compared to Moore's Law, as well as the difficulty of finding this data, please see this post. If nothing else, keep two features of the plot in mind: 1) the consistency of the pace of Moore's Law and 2) the inconsistency of the pace of sequencing productivity. Illumina appears to be the primary driver, and beneficiary, of improvements in productivity at the moment, especially if you are looking at share prices. It looks like the recently announced NextSeq and HiSeq instruments will provide substantially higher productivities (hand waving, I would say the next datum will come in another order of magnitude higher), but I think I need a bit more data before officially putting another point on the plot. Based on Erika Check Hayden's coverage at Nature, it seems that the new instruments should also provide substantial price improvements, which I get into below.
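For readers who want to make the "pace" comparison quantitative, the sketch below computes the doubling time implied by any two points on a productivity curve, which can then be compared against Moore's Law's roughly two-year doubling. The sequencing numbers are placeholders to be replaced with values read off the plot, not data from the figure itself.

```python
import math

def doubling_time_years(value_start, value_end, years):
    """Doubling time implied by two data points separated by `years`."""
    cagr = (value_end / value_start) ** (1.0 / years)  # compound annual growth
    return math.log(2) / math.log(cagr)

# Placeholder: productivity up 1000x over 10 years (illustrative only).
print(doubling_time_years(1, 1000, 10))  # ~1.0 year doubling time
```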

As for synthesis productivity, there have been no new commercially available instruments released for many years. Providers of sDNA are clearly pushing productivity gains in house, but no one outside those companies has access to productivity data.
[Figure: DNA synthesis and sequencing productivity compared with Moore's Law (Carlson_DNA_Prod_Feb2013.png)]
DNA sequencing and synthesis prices

The most important thing to notice about the plots below is that prices have stopped falling precipitously. If you hear or read anyone asserting that costs are falling exponentially, you can politely refer them to the data (modulo the putative performance of the new Illumina instruments). We might again see exponential price decreases, but that will depend on a combination of technical innovation, demand, and competition, and I refer the reader to previous posts on the subject. Note that prices not falling isn't necessarily bad and doesn't mean the industry is somehow stagnant. Instead, it means that revenues in these sectors are probably not falling, which will certainly be welcomed by the companies involved. As I described a couple of weeks ago, and in a Congressional briefing in November, revenues in biotech continue to climb steeply.

The second important thing to notice about these plots is that I have changed the name of the metric from "cost" to "price". Previously, I had decided that this distinction amounted to no real difference for my purposes. Now, however, the world has changed, and cost and price are very different concepts for anyone thinking about the future of DNA. Previously, there was at times an order of magnitude change in cost from year to year, and keeping track of the difference between cost and price didn't matter. In a period when change is much slower, that difference becomes much more important. Moreover, as the industry becomes larger, more established, and generally more important for the economy, we should all take more care in distinguishing between concepts like cost to whom and price for whom.

In the plot that follows, the price is for finished, not raw, sequencing.
[Figure: prices of DNA sequencing and synthesis (Carlson_Price_Seq_Synth_Feb2014.png)]
And here is a plot only of oligo and gene-length DNA:
[Figure: price per base of oligo and gene-length sDNA (Carlson_Price_sDNA_Feb2014.png)]
What does all this mean? Illumina's instruments are now responsible for such a high percentage of sequencing output that the company is effectively setting prices for the entire industry. Illumina is being pushed by competition to increase performance, but this does not necessarily translate into lower prices. It doesn't behoove Illumina to drop prices at this point, and we won't see any substantial decrease until a serious competitor shows up and starts threatening Illumina's market share. The absence of real competition is the primary reason sequencing prices have flattened out over the last couple of data points.

I pulled the final datum on the sequencing curve from the NIH; the title on the NIH curve is "cost", but as it includes indirect academic costs I am going to fudge and call it "price". I notice that the NIH is now publishing two sequencing cost curves, and I'll bet that the important differences between them are too subtle for most viewers. One curve shows cost per megabase of raw sequence - that is, data straight off the instruments - and the other curve shows cost per finished human genome (assuming ~30X coverage of 3x10^9 bases). The cost per base of that finished sequencing is a couple orders of magnitude higher than the cost of the raw data. On the HiSeq X data sheet, Illumina has boldly put a point on the cost per human genome curve at $1000. But I have left it off the above plot for the time being; the performance and cost claimed by Illumina are just for human genomes rather than for arbitrary de novo sequencing. Mick Watson dug into this, and his sources inside Illumina claim that this limitation is in the software, rather than the hardware or the wetware, in which case a relatively simple upgrade could dramatically expand the utility of the instrument. Or perhaps the "de novo sequencing level" automatically unlocks after you spend $20 million in reagents. (Mick also has some strong opinions about the role of competition in pushing the development of these instruments, which I got into a few months ago.)
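The relationship between the two NIH curves is easy to reproduce. This sketch converts a raw per-megabase price into a finished-genome price at ~30X coverage; the dollar figures are illustrative placeholders, not the NIH's actual numbers.

```python
GENOME_SIZE = 3e9               # haploid human genome, bases
COVERAGE = 30                   # ~30X coverage for a finished genome
RAW_PRICE_PER_MB = 0.05         # illustrative $/megabase of raw reads
FINISHED_GENOME_PRICE = 5000.0  # illustrative $ per finished genome

raw_cost = (GENOME_SIZE * COVERAGE / 1e6) * RAW_PRICE_PER_MB
raw_per_base = RAW_PRICE_PER_MB / 1e6
finished_per_base = FINISHED_GENOME_PRICE / GENOME_SIZE

print(f"raw reads for one 30X genome: ${raw_cost:,.0f}")              # $4,500
print(f"raw price per base:      {raw_per_base:.1e}")                 # 5.0e-08
print(f"finished price per base: {finished_per_base:.1e}")            # 1.7e-06
print(f"finished/raw ratio: {finished_per_base / raw_per_base:.0f}x") # 33x
```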

Synthesis prices have slowed for entirely different reasons. Again, I have covered this ground in many other posts, so I won't belabor it here. 

Note that the oligo prices above are for column-based synthesis, and that oligos synthesized on arrays are much less expensive. However, array synthesis comes with the usual caveat that the quality is generally lower, unless you are getting your DNA from Agilent, which probably means you are getting your dsDNA from Gen9.

Note also that the distinction between the price of oligos and the price of double-stranded sDNA is becoming less useful. Whether you are ordering from Life/Thermo or from your local academic facility, the cost of producing oligos is now, in most cases, independent of their length. That's because the cost of capital (including rent, insurance, labor, etc.) is now more significant than the cost of goods. Consequently, the price reflects the cost of capital rather than the cost of goods. Moreover, the cost of the columns, reagents, and shipping tubes is certainly more than the cost of the atoms in the sDNA you are ostensibly paying for. Once you get into longer oligos (substantially larger than 50-mers) this relationship breaks down and the sDNA is more expensive. But, at this point in time, most people aren't going to use longer oligos to assemble genes unless they have a tricky job that doesn't work using short oligos.

Looking forward, I suspect oligos aren't going to get much cheaper unless someone sorts out how to either 1) replace the requisite human labor and thereby reduce the cost of capital, or 2) finally replace the phosphoramidite chemistry that the industry relies upon. I know people have been talking about new synthesis chemistries for many years, but I have not seen anything close to market.

Even the cost of double-stranded sDNA depends less strongly on length than it used to. For example, IDT's gBlocks come at prices that are constant across quite substantial ranges in length. Moreover, part of the decrease in price for these products is embedded in the fact that you are buying smaller chunks of DNA that you then must assemble and integrate into your organism of choice. The longer gBlocks come in as low as ~$0.15/base, but you have to spend time and labor in house in order to do anything with them. Finally, so far as I know, we are still waiting for Gen9 and Cambrian Genomics to ship DNA at the prices they have suggested are possible. 
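A toy pricing model captures the structure described in the last few paragraphs: a flat, capital-dominated charge for short column-synthesized oligos, shifting to a per-base charge once constructs get long. The breakpoint and dollar figures below are illustrative stand-ins, not any vendor's actual price list.

```python
def sdna_price(length_nt):
    """Illustrative price model for column-synthesized sDNA.

    Below ~50-mers the price is dominated by the cost of capital
    (labor, rent, columns, shipping), so it is nearly flat; above
    that, per-base costs start to matter.
    """
    FLAT_FEE = 10.00      # hypothetical handling/capital charge, $
    PER_BASE_LONG = 0.15  # hypothetical $/base for longer constructs
    if length_nt <= 50:
        return FLAT_FEE
    return FLAT_FEE + (length_nt - 50) * PER_BASE_LONG

for n in (20, 50, 200, 1000):
    print(f"{n:>5}-mer: ${sdna_price(n):.2f}")
# A 1000-mer comes out near $150, consistent with ~$0.15/base gene prices.
```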

How much should we care about the price of sDNA?

I recently had a chat with someone who has purchased and assembled an absolutely enormous amount of sDNA over the last decade. He suggested that if prices fell by another order of magnitude, he could switch completely to outsourced assembly. This is an interesting claim, and potentially an interesting "tipping point". However, what this person really needs is not just sDNA, but sDNA integrated in a particular way into a particular genome operating in a particular host. And, of course, the integration and testing of the new genome in the host organism is where most of the cost is. Given the wide variety of emerging applications, and the growing array of hosts/chassis, it isn't clear that any given technology or firm will be able to provide arbitrary synthetic sequences incorporated into arbitrary hosts.

Consequently, as I have described before, I suspect that we aren't going to see a huge uptake in orders for sDNA until cheaper genes and circuits are coupled to reductions in cost for the rest of the build, test, and measurement cycle. Despite all the talk recently about organism fabs and outsourced testing, I suggest that what will really make a difference is providing every lab and innovator with adequate tools and infrastructure to do their own complete test and measurement. We should look to progress in pushing all facets of engineering capacity for biological systems, not just on reducing the cost of reading old instructions and writing new ones.

----

Comments from original post follow.

George Church:

"the performance and cost claimed by Illumina are just for human genomes rather than for arbitrary de novo sequencing."  --Rob
(genome.gov/images/content/cost_per_genome.jpg)  But most of the curve has been based on human genome sequencing until now.  Why exclude human, rather than having a separate curve for "de novo"?  Human genomes constitute a huge and compelling market.    -- George

"oligos synthesized on arrays are much less expensive. However, array synthesis comes with the usual caveat that the quality is generally lower, unless you are getting your DNA from Agilent"  -- Rob
So why exclude Agilent from the curve? -- George

"we aren't going to see a huge uptake in orders for sDNA until cheaper genes and circuits are coupled to reductions in cost for the rest of the build, test, and measurement cycle." --Rob
Is this the sort of enabling technology needed?: arep.med.harvard.edu/pdf/Goodman_Sci_13.pdf

My response to George:

George,

Thanks for your comments. I am not sure what you might mean by "most of the curve has been based on human genome sequencing". From my first efforts in 2000 (published initially in 2003), I have tried to use data that is more general. It is true that human genomes constitute a large market, but they aren't the only market. By definition, if you are interested in sequencing or building any other organism, then instruments that specialize in sequencing humans are of limited relevance. It may also be true that the development of new sequencing technologies and instruments has been driven by human sequencing, but that is beside the point. It may even be true that the new Illumina systems can just as easily be used to sequence mammoths, but that isn't happening yet. I have been doing my best to track the cost, and now the price, of de novo sequencing.

As I mention in the post, it is time that everyone, including me, started being more careful about the difference between cost and price. This brings me to oligos.

Agilent oligos are a special case. So far as I know, only Gen9 is using Agilent oligos as raw material to build genes. As you know, Cambrian Genomics is using arrays produced using technology developed at Combimatrix, and in any event isn't yet on the market. It is my understanding that, aside from Gen9, Agilent's arrays are primarily used for analysis rather than for building anything. Therefore, the market *price* of Agilent oligos is irrelevant to anyone except Gen9.

If the *cost* of Agilent oligos to Gen9 were reflected in the *price* of the genes sold by Gen9, or if those oligos were more broadly used, then I would be more interested in including them on the price curve. So far as I am aware, the price for the average customer at Gen9 is in the neighborhood of $.15-.18 per base. I've heard Drew Endy speak of a "friends and family" (all academics?) price of ~$.09 from Gen9, but that does not appear to be available to all customers, so I will leave it off the plot for now.

All this comes down to the obvious fact that, as the industry matures and becomes supported more by business-to-business sales rather than being subsidized by government grants and artificially cheap academic labor, the actual cost and actual price start to matter a great deal. Deextinction, in particular, might be an example where an academic/non-profit project might benefit from low cost (primarily cost of goods and cost of labor) that would be unachievable on the broader market, where the price would be set by 1) keeping the doors of a business open, 2) return on capital, and 3) competition, not necessarily in that order. The academic cost of developing, demonstrating, and even using technologies is almost always very different from the eventual market price of those technologies.

The bottom line is that, from day one, I have been trying to understand the impact of biological technologies on the economy. This impact is most directly felt, and tracked, via the price that most customers pay for goods and services. I am always looking to improve the metrics I use, and if you have suggestions about how to do this better I am all ears.

Finally, yes, the papers you cite (above and on the Deextinction mailing list) describe the sort of thing that could help reduce engineering costs. Ultimately technologies like those will reduce the market price of products resulting from that engineering process. I look forward to seeing more, and also to seeing this technology utilized in the market.

Thanks again for your thoughtful questions.
I recently rediscovered this piece, which I had forgotten I had written as a submission to the development process for the National Bioeconomy Blueprint. The #GMDP numbers are obviously a bit out of date: in 2012, US revenues from genetically modified systems reached at least $350 billion, or ~2.5% of GDP.

"Building a 21st Century Bioeconomy: Fostering Economic and Physical Security Through Public-Private Partnerships and a National Network of Community Labs" (click image for PDF).



The bioeconomy continues to emerge as a significant component of the U.S. economy. Domestic revenues from genetically modified systems are growing at approximately 15% annually, much faster than the economy as a whole. Around the world an ever-larger number of countries have articulated strategies that explicitly identify biotechnology as critical to economic growth. The U.K., for example, has gone so far as to explicitly name synthetic biology as one of the "eight great technologies" that will propel economic growth and has announced more than £160M (roughly £2.50 per capita) for research, development, and commercialization of synthetic biology.


GMDP


As I announced during a Congressional Briefing in November, the total 2012 U.S. revenues from genetically modified systems, hereafter the Genetically Modified Domestic Product (GMDP), reached at least $350 billion, the equivalent of approximately 2.5% of GDP, up from $300 billion in 2010. For comparison, according to IHS iSuppli, the 2012 global revenues for the semiconductor industry amounted to $322 billion. Remarkably, assuming a 2011-12 GDP annual growth rate of 2.5%, the two year, $50 billion increase in GMDP accounted for almost 7% of total U.S. GDP growth.
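The arithmetic behind that last sentence is worth making explicit. Here is a quick check in Python, using the GDP level implied by the 2.5% share and an assumed 2.5% annual growth rate; both GDP figures are back-of-the-envelope assumptions rather than official statistics.

```python
GMDP_2012 = 350e9   # $, from the text above
GMDP_2010 = 300e9   # $, from the text above
GDP_SHARE = 0.025   # GMDP as a fraction of 2012 GDP
GDP_GROWTH = 0.025  # assumed annual GDP growth, 2011-2012

gdp_2012 = GMDP_2012 / GDP_SHARE                        # ~$14 trillion implied
gdp_gain = gdp_2012 - gdp_2012 / (1 + GDP_GROWTH) ** 2  # two years of growth
gmdp_gain = GMDP_2012 - GMDP_2010                       # $50 billion

print(f"implied 2012 GDP: ${gdp_2012 / 1e12:.0f} trillion")
print(f"GMDP share of two-year GDP growth: {gmdp_gain / gdp_gain:.0%}")  # ~7%
```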


Due to differences in regulatory structure, financing, and, consequently, pace of development and commercialization across the industry, the GMDP naturally breaks down into the sub-sectors of biotech drugs (biologics), GM crops, and industrial biotechnology.


Biologics


In 2012, global revenues from biologics reached $125 billion. In the U.S., domestic revenues from biologics reached nearly $100 billion, although this figure includes $28 billion in revenues accruing to companies such as Genentech, Zymogenetics, and Genzyme that are now wholly owned by overseas entities. Domestic U.S. clinical demand for biologics rose about 5%, reaching almost $54 billion in sales in 2011, indicating that the U.S. continues to enjoy a substantial positive balance of payments from biologics sold in international markets.


GM Crops


In 2012, global planting of GM crops increased by 6%, reaching a total of 170 million hectares. Of the 17 million farmers who chose to plant GM crops, more than 15 million were "resource poor farmers in developing countries". In the U.S., where farmers planted 40% of the total GM area, GM corn, cotton, and soy held steady at approximately 90% penetration, with GM sugar beets planted at about the 95% level. Based on average crop revenue figures compiled by the USDA, I estimate that in 2012 the combination of biotech seeds and farm-level revenues reached $125 billion in the U.S.
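For what it's worth, an estimate like the one above can be sketched as a simple sum over crops: GM penetration times planted area times average revenue per hectare, plus seed revenues. The inputs below are hypothetical placeholders, not the USDA figures behind the actual estimate.

```python
# (GM penetration, planted area in millions of hectares, revenue $/ha).
# All values are illustrative; real inputs come from USDA crop reports.
crops = {
    "corn":        (0.90, 35.0, 1700),
    "soy":         (0.93, 31.0,  950),
    "cotton":      (0.90,  4.0, 1500),
    "sugar beets": (0.95,  0.5, 2500),
}

farm_revenue = sum(pen * area * 1e6 * rev for pen, area, rev in crops.values())
print(f"farm-level GM revenue: ${farm_revenue / 1e9:.0f} billion")
# Seed revenues would be added on top to reach a combined figure.
```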


Industrial Biotechnology


U.S. revenues from industrial biotech (fuels, enzymes, and materials) reached at least $125 billion in 2012. The accuracy of this estimate continues to suffer in comparison to revenues from biologics and GM crops due to the quality of available data. For the purposes of this post, I am temporarily relying on an estimate provided by Agilent Technologies, as recently described by Darlene Solomon. The internal breakdown of the $125 billion in business-to-business sales is quite interesting: $66 billion in chemicals, $30 billion in biofuels, $16 billion in biologics feedstocks, $12 billion in food and ag, and $1 billion in emerging markets. (Agilent did not provide any greater specificity on how these areas were defined or how the numbers were derived.) As I have been predicting for several years, it appears that chemicals have eclipsed fuels as the largest component of industrial biotech revenues. Finally, note that, at the level of consumers, the ultimate economic impact of these revenues is probably larger than $125 billion.


More Work To Do


It is important to recognize that the preceding estimates are relatively inaccurate compared to those describing other parts of the U.S. economy. While I have previously estimated revenues from biopharmaceuticals ("biologics") and GM crops using corporate financial reporting and data collected by the USDA, respectively, revenues from industrial biotechnology are poorly constrained because no relevant data is gathered by the U.S. government or provided by industry (see previous reports on this topic for an in-depth discussion). The Agilent numbers are a welcome additional bit of information, but we really need better data for, and analysis of, the GMDP in order to understand the larger impacts on our economy and society. Among other details, we need to understand the skill base and employment in the industry, and to generate historical estimates in order to sort out what the longer term trends look like. As I mentioned during my presentation at SynBioBeta in November, I am launching a new non-profit to take up this task. More on this soon.


#GMDP = Genetically Modified Domestic Product

BAS: From national security to natural security

Here is my recent essay in Bulletin of the Atomic Scientists: "From national security to natural security".

The first few paragraphs:

From 10,000 meters up, the impact of humans on the Earth is clear. Cities spanning kilometers are connected by roadways stretching to the horizon. Crowding the spaces in between are fields that supply food and industrial feedstock to satisfy a variety of human hungers. These fields feed humanity. Through stewardship we maintain their productivity and thus sustain societies that extend around the globe; if these fields fall into ill health, or if we push them into sickness, we risk the fate of those same societies.

Humans have a long history of modifying the living systems they rely on. Forests in Europe and North America have been felled for timber and have regrown, while other large tracts of land around the world have been completely cleared for use in agriculture. The animals and plants we humans eat on a regular basis have been selected and bred over millennia to suit the human palate and digestive tract. All these crops and products are shipped and consumed globally to satisfy burgeoning demand.

Our technology and trade thereby support a previously unimaginable quality of life for a previously impossible number of people. Humanity has kept Malthus at bay through centuries of growth. Yet our increasing numbers impose a load that is now impacting nature's capacity to support human societies. This stress comes at a time when ever-larger numbers of humans demand more: more food, more clean water, more energy, more education, more entertainment, more more.

Increasing human demand raises the question of supply, and of the costs of meeting that supply. How we choose to spend or to conserve land, water, and air to meet our needs clearly impacts other organisms that also depend on these resources. Nature has intrinsic value for many people in the form of individual species, ecosystems, and wilderness; nature also constitutes critical infrastructure in the form of ecosystems that keep us alive. That infrastructure has quantifiable economic value. Consequently, nature, and the change we induce in it, is clearly interwoven with our economy. That is, the security and health of human societies depends explicitly upon the security and health of natural systems. Therefore, as economic security is now officially considered as part and parcel of national security strategy, it is time to expand our understanding of national security to include natural security.

Here is the main page for the 5 November, 2013 Congressional Briefing in the U.S. Senate, Tooling the U.S. Bioeconomy: Synthetic Biology. Speakers included Mary Maxon, Darlene Solomon, and Chris Voigt. Here is my contribution:

 

[Video: Rob Carlson, "Components and Potential of the Growing Bioeconomy," from ACS Science & the Congress, on Vimeo.]


And here is the Q&A following the presentations, during which we got into issues of risk, security, public acceptance, etc:
