On DNA and Transistors

Here is a short post to clarify some important differences between the economics of markets for DNA and for transistors. I keep getting asked related questions, so I decided to elaborate here.

But first, new cost curves for reading and writing DNA. The occasion is some new data gleaned from a somewhat out-of-the-way source: the Genscript IPO Prospectus. It turns out that, while preparing their IPO docs, Genscript hired Frost & Sullivan to do a market survey across much of the life sciences. The Prospectus then puts Genscript's revenues in the context of the global market for synthetic DNA, which together provide some nice anchors for discussing how things are changing (or not).

So, with no further ado: Frost & Sullivan found that the 2014 global market for oligos was $241 million, and the global market for genes was $137 million. (Note that I tweeted out larger estimates a few weeks ago, before I had read the whole document.) Genscript reports that they received $35 million in 2014 for gene synthesis, or 25.6% of the market, which they claim puts them in the pole position globally. Genscript further reports that the price for genes in 2014 was $.34 per base pair. This sounds much too high to me, so it must be based on duplex synthesis; that would bring the linear cost down to $.17 per base, which sounds much more reasonable because it is consistent with what I hear on the street. (It may be that Gen9 is shipping genes at $.07 per base, but I don't know anyone outside of academia who is paying that low a rate.) If you combine the price per base with the size of the market, you get about 1 billion bases worth of genes shipped in 2014 (so a million genes, give or take). This is consistent with Ginkgo's assertion that their 100 million base deal with Twist was the equivalent of 10% of the global gene market in 2015. For oligos, if you combine Genscript's reported average price per base, $.05, with the market size, you get about 4.8 billion bases worth of oligos shipped in 2014. Frost & Sullivan projects that from 2015 to 2019 the oligo market CAGR will be 6.6% and the gene synthesis market's will come in at 14.7%.
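As a back-of-the-envelope check, the volume figures above fall out of simple division, using the Frost & Sullivan market sizes and the Genscript per-base prices quoted above:

```python
# Back-of-the-envelope check of the 2014 market figures cited above.
# Market sizes and prices are the Frost & Sullivan / Genscript numbers in the text.

gene_market_usd = 137e6      # 2014 global gene synthesis market
gene_price_per_base = 0.17   # linear per-base price (half the duplex $0.34)
gene_bases = gene_market_usd / gene_price_per_base
print(f"Gene bases shipped: {gene_bases:.2e}")   # ~8e8, i.e. about a billion

oligo_market_usd = 241e6     # 2014 global oligo market
oligo_price_per_base = 0.05  # Genscript's reported average oligo price
oligo_bases = oligo_market_usd / oligo_price_per_base
print(f"Oligo bases shipped: {oligo_bases:.2e}")  # ~4.8e9
```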

For sequencing, I have capitulated and put the NextSeq $1000 human genome price point on the plot. This instrument is optimized to sequence human DNA, and I can testify personally that sequencing arbitrary DNA is more expensive, because you have to work up your own processes and software. But I am tired of arguing with people. So use the plot with those caveats in mind.

NOTE: This replaces a prior plot, which contained an error in the sequencing price.

What is most remarkable about these numbers is how small they are. The way I usually gather data for these curves is to chat with people in the industry, mine publications, and spot check price lists. All that led me to estimate that the gene synthesis market was about $350 million (and has been for years) and the oligo market was in the neighborhood of $700 million (and has been for years).

If the gene synthesis market is really only $137 million, with four or five companies vying for market share, then that is quite an eye opener. Even if that is off by a factor of two or three, getting closer to my estimate of $350 million, that just isn't a very big market to play in. A ~15% CAGR is nothing to sneeze at, usually, and that is a doubling time of about 5 years. But the price of genes is now falling by 15% every 3-4 years (or only about 5% annually). So, for the overall dollar size of the market to grow at 15%, the number of genes shipped every year has to grow at close to 20% annually. That's about 200 million additional bases (or ~200,000 more genes) ordered in 2016 compared to 2015. That seems quite large to me. How many users can you think of who are ramping up their ability to design or use synthetic genes by 20% a year? Obviously Ginkgo, for one. As it happens, I do know of a small number of other such users, but added together they do not come close to constituting that 20% overall increase. All this suggests to me that the dollar value of the gene synthesis market will be hard pressed to keep up with Frost & Sullivan's estimate of 14.7% CAGR, at least in the near term. As usual, I will be happy to be wrong about this, and happy to celebrate faster growth in the industry. But bring me data.
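The growth arithmetic in the paragraph above can be sanity-checked in a few lines. A sketch, assuming the 14.7% CAGR and ~5% annual price decline quoted above, and starting from the rough figure of 1 billion bases shipped in 2014:

```python
import math

# How the ~20% unit-growth requirement falls out of the other numbers.
cagr = 0.147                 # Frost & Sullivan's gene-market CAGR
doubling_years = math.log(2) / math.log(1 + cagr)
print(f"Doubling time at 14.7% CAGR: {doubling_years:.1f} years")  # ~5.1

price_decline = 0.05         # ~5% annual decline in price per base
unit_growth = (1 + cagr) / (1 - price_decline) - 1
print(f"Required annual unit growth: {unit_growth:.1%}")  # ~20.7%

# Additional bases needed in 2016 vs 2015, from ~1e9 bases in 2014:
bases_2015 = 1e9 * (1 + unit_growth)
extra_2016 = bases_2015 * unit_growth
print(f"Additional bases in 2016: {extra_2016:.2e}")  # ~2.5e8, roughly 200 million
```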

People in the industry keep insisting that once the price of genes falls far enough, the ~$3 billion market for cloning will open up to synthetic DNA. I have been hearing that story for a decade. And price isn't the only factor. To play in the cloning market, synthesis companies would actually have to be able to deliver genes and plasmids faster than cloning. Given that I'm hearing delivery times for synthetic genes are running at weeks, to months, to "we're working on it", I don't see people switching en masse to synthetic genes until the performance improves. If it costs more to have your staff waiting for genes to show up by FedEx than to have them bash the DNA by hand, they aren't going to order synthetic DNA.

And then what happens if the price of genes starts falling rapidly again? Or, forget rapidly, what about modestly? What if a new technology comes in and outcompetes standard phosphoramidite chemistry? The demand for synthetic DNA could accelerate and the total market size still might be stagnant, or even fall. It doesn't take much to turn this into a race to the bottom. For these and other reasons, I just don't see the gene synthesis market growing very quickly over the next 5 or so years.

Which brings me to transistors. The market for DNA is very unlike the market for transistors, because the role of DNA in product development and manufacturing is very unlike the role of transistors. Analogies are tremendously useful in thinking about the future of technologies, but only to a point; the unwary may miss differences that are just as important as the similarities.

For example, the computer in your pocket fits there because it contains orders of magnitude more transistors than a desktop machine did fifteen years ago. Next year, you will want even more transistors in your pocket, or on your wrist, which will give you access to even greater computational power in the cloud. Those transistors are manufactured in facilities now costing billions of dollars apiece, a trend driven by our evidently insatiable demand for more and more computational power and bandwidth access embedded in every product that we buy. Here is the important bit: the total market value for transistors has grown for decades precisely because the total number of transistors shipped has climbed even faster than the cost per transistor has fallen.
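The contrast between the two markets comes down to one line of arithmetic: total revenue grows only when unit volume climbs faster than unit price falls. A minimal sketch, with illustrative growth rates of my own choosing (not industry figures):

```python
# Revenue growth = volume growth compounded with price decline.
# All rates here are illustrative, not industry data.

price_decline = 0.30   # price per transistor falling 30% per year
unit_growth = 0.50     # transistors shipped growing 50% per year
revenue_growth = (1 + unit_growth) * (1 - price_decline) - 1
print(f"Transistor-style market: {revenue_growth:+.0%}")  # +5%: market grows

# Contrast with a DNA-style scenario: demand grows, but slower than price falls.
dna_demand_growth = 0.10
dna_price_decline = 0.30
dna_revenue_growth = (1 + dna_demand_growth) * (1 - dna_price_decline) - 1
print(f"DNA-style market: {dna_revenue_growth:+.0%}")  # -23%: market shrinks
```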

In contrast, biological manufacturing requires only one copy of the correct DNA sequence to produce billions in value. That DNA may code for just one protein used as a pharmaceutical, or it may code for an entire enzymatic pathway that can produce any molecule now derived from a barrel of petroleum. Prototyping that pathway will require many experiments, and therefore many different versions of genes and genetic pathways. Yet once the final sequence is identified and embedded within a production organism, that sequence will be copied as the organism grows and reproduces, terminating the need for synthetic DNA in manufacturing any given product. The industrial scaling of gene synthesis is completely different than that of semiconductors.

Tim Cook is Defending Your Brain

Should the government have the right to troll through your thoughts and memories? That seems like a question for a "Minority Report" or "Matrix" future, but legal precedent is being set today. This is what is really at stake in an emerging tussle between Washington DC and Silicon Valley.

The Internets are all abuzz with Apple's refusal to hack an iPhone belonging to an accused terrorist. The FBI has served a court order on Apple, based on the All Writs Act of 1789, requiring Apple to break the lock that limits the number of times a passcode can be tried. Since law enforcement has been unable to crack the security of iOS on its own, it wants Apple to write special software to do the job. Here is Wired's summary. This NYT story has additional good background. The short version: should law enforcement and intelligence agencies be able to compel corporations to hack devices owned by citizens and entrusted with their sensitive information?

Apple CEO Tim Cook published a letter saying no, thank you, because weakening the security of iPhones would be bad for his customers and "has implications far beyond the legal case at hand". Read Cook's letter; it is thoughtful. The FBI says it is just about this one phone and "isn't about trying to set a precedent," in the words of FBI Director James Comey. But this language is neither accurate nor wise — and it is important to say so.

Once the software is written, the U.S. government can hardly argue it will never be used again, nor that it will never be stolen off government servers. And since the point of the hack is to be able to push it onto a phone without consent (which is itself a backdoor that needs closing), this software would allow breaking the locks on any susceptible iPhone, anywhere. Many commentators have observed that any effort to hack iOS this once would facilitate repetitions, and any general weakening of smartphone security could easily be exploited by governments or groups less concerned about due process, privacy, or human rights. (And you do have to wonder whether Tim Cook's position here is influenced by his experience as a gay man, a demographic that has been persecuted, if not actually prosecuted, merely for thought and intent by the same organization now sitting on the other side of the table. He knows a thing or two about privacy.) U.S. Senator Ron Wyden has a nice take on these issues. Yet while these are critically important concerns for modern life, they are shortsighted. There is much more at stake here than just one phone, or even the fate of one particular company. The bigger, longer-term issue is whether governments should have access to electronic devices that we rely on in daily life, particularly when those devices are becoming extensions of our bodies and brains. Indeed, these devices will soon be integrated into our bodies — and into our brains.

Hacking electronically-networked brains sounds like science fiction. That is largely because there has been so much science fiction produced about neural interfaces, Matrices, and the like. We are used to thinking of such technology as years, or maybe decades, off. But these devices are already a reality, and will only become more sophisticated and prevalent over the coming decades. Policy, as usual, is way behind.

My concern, as usual, is less about the hubbub in the press today and more about where this all leads in ten years. The security strategy and policy we implement today should be designed for a future in which neural interfaces are commonplace. Unfortunately, today's politicians and law enforcement are happy to set legal precedent that will create massive insecurity in just a few years. We can be sure that any precedent of access to personal electronic devices adopted today, particularly any precedent in which a major corporation is forced to write new software to hack a device, will still be cited decades hence, when technology that connects hardware to our wetware is certain to be common. After all, the FBI is now proposing that a law from 1789 applies perfectly well in 2016, allowing a judge to "conscript Apple into government service", and many of our political representatives appear delighted to concur. A brief tour of current technology and security flaws sets the stage for how bad it is likely to get.

As I suggested a couple of years ago, hospital networks and medical devices are examples of existing critical vulnerabilities. Just in the last week hackers took control of computers and devices in a Los Angeles hospital, and only a few days later received a ransom to restore access and functionality. We will be seeing more of this. The targets are soft, and when attacked they have little choice but to pay when patients' health and lives are on the line. What are hospitals going to do when they are suddenly locked out of all the ventilators or morphine pumps in the ICU? Yes, yes, they should harden their security. But they won't be fully successful, and additional ransom events will inevitably happen. Patients will be exposed to more such flaws as they come to rely more heavily on medical devices to maintain their health. Now consider where this trend is headed: what sorts of security problems will we create by implanting those medical devices into our bodies?

Already on the market are cochlear implants that are essentially Ethernet connections to the brain, although they are not physically configured that way today. An external circuit converts sound into signals that directly stimulate the auditory nerves. But who holds the password for the hardware? What other sorts of signals can be piped into the auditory nerve? This sort of security concern, in which networked electronics implanted in our bodies create security holes, has actually been with us for more than a decade. While serving as Vice President, Dick Cheney had the wireless networking on his fully-implanted heart defibrillator disabled because it was perceived as a threat. The device contained a test mode that could be exploited to fully discharge the battery into the surrounding tissue. This might be called a fatal flaw. And it will only get worse.

DARPA has already limited the strength of a recently developed, fully articulated bionic arm to "human normal" precisely because the organization is worried about hacking. These prosthetics are networked in order to tune their function and provide diagnostic information. Hacking is inevitable, by users interested in modifications and by miscreants interested in mischief.

Not content to replace damaged limbs, within the last few months DARPA has announced a program to develop what the staff sometimes calls a "cortical modem". DARPA is quite serious about developing a device that will provide direct connections between the internet and the brain. The pieces are coming together quickly. Several years ago a patient in Sweden received a prosthesis grafted to the bone in his arm and controlled by local neural signals. Last summer I saw Gregoire Courtine show video of a monkey implanted with a microfabricated neural bridge that spanned a severed spinal cord; flip a switch on and the monkey could walk, flip it off and the monkey was lame. Just this month came news of an implanted cortical electrode array used to directly control a robot arm. Now, imagine you have something like this implanted in your spine or head, so that you can walk or use an arm, and you find that the manufacturer was careless about security. Oops. You'll have just woken up — unpleasantly — in a William Gibson novel. And you won't be alone. Given the massive medical need, followed closely by the demand for augmentation, we can expect rapid proliferation of these devices and accompanying rapid proliferation of security flaws, even if today they are one-offs. But that is the point; as Gibson has famously observed, "The future is already here — it's just not evenly distributed yet."

When — when — cortical modems become an evenly distributed human augmentation, they will inevitably come with memory and computational power that exceeds the wetware they are attached to. (Otherwise, what would be the point?) They will expand the capacity of all who receive them. They will be used as any technology is, for good and ill. Which means they will be targets of interest for law enforcement and intelligence agencies. Judges will be grappling with this for decades: where does the device stop and the human begin? ("Not guilty by reason of hacking, your honor." "I heard voices in my head.") And these devices will also come with security flaws that will expose the human brain to direct influence from attackers. Some of those flaws will be accidents, bugs, zero-days. But how will we feel about back doors built in to allow governments to pursue criminal or intelligence investigations, back doors that lead directly into our brains? I am profoundly unimpressed by suggestions that any government could responsibly use or look after keys to any such back door.

There are other incredibly interesting questions here, though they all lead to the same place. For example, would neural augmentation count as a medical device? If so, what does the testing look like? If not, who will be responsible for guaranteeing safety and security? And I have to wonder, given the historical leakiness of backdoors: if governments insist on access to these devices, who is going to want to accept the liability inherent in protecting access to customers' brains? What insurance or reinsurance company would issue a policy indemnifying a cortical modem with a known, built-in security flaw? Undoubtedly an insurance policy can be written that exempts governments from responsibility for the consequences of using a backdoor, but how can a government or company guarantee that no one else will exploit the backdoor? Obviously, they can do no such thing. Neural interfaces will have to be protected by maximum security; otherwise manufacturers will never subject themselves to the consequent product liability.

Which brings us back to today, and the precedent set by Apple in refusing to make it easy for the FBI to hack an iPhone. If all this talk of backdoors and golden keys by law enforcement and politicians moves forward to become precedent by default, or is written into law, we risk building security holes into even more devices. Eventually, we will become subject to those security holes in increasingly uncomfortable, personal ways. That is why it is important to support Tim Cook as he defends your brain.

70 Years After Hiroshima: "No government is well aware of the economic importance of biotechnology"

I was recently interviewed by Le Monde for a series on the impact of Hiroshima on science and science policy, with a particular focus on biotechnology, synthetic biology, and biosecurity. Here is the story in French. Since the translation via Google is a bit cumbersome to read, below is the English original.

Question 1

On the 16th of July 1945, after the first large-scale nuclear test in New Mexico (called Trinity), the American physicist Kenneth Bainbridge, who directed the test, told Robert Oppenheimer, head of the Manhattan Project, "Now we are all sons of bitches."

In your discipline, do you feel that the moment when researchers might have the same revelation has been reached? Will it be soon?

I think this analogy does not apply to biotechnology. It is crucially important to distinguish between weapons developed in a time of war and the pursuit of science and technology in a time of peace. Over the last thirty years, biotechnology has emerged as a globally important technology because it is useful and beneficial. 

The development and maintenance of biological weapons is internationally outlawed, and has been for decades. The Trinity test, and more broadly the Manhattan Project, was a response to what the military and political leaders of the time considered an existential threat. These were actions taken in a time of world war. The scientists and engineers who developed the U.S. bombs were almost to a person ambivalent about their roles – most saw the downsides, yet were also convinced of their responsibility to fight against the Axis Powers. Developing nuclear weapons was seen as imperative for survival.

The scale of the Manhattan Project (both in personnel and as a fraction of GDP) was unprecedented, and remains so. In contrast to the exclusive governmental domain of nuclear weapons, biotechnology has been commercially developed largely with private funds. The resulting products – whether new drugs, new crop traits, or new materials – have clear beneficial value to our society.

Question 2

Do you have this feeling in other disciplines? Which ones ? Why?

No. There is nothing in our experience like the Manhattan Project and nuclear weapons. It is easy to point to the participants’ regrets, and to the long aftereffects of dropping the bomb, as a way to generate debate about, and fear of, new technologies. The latest bugaboos are artificial intelligence and genetic engineering. But neither of these technologies – even if they can be said to qualify as mature technologies – is even remotely as impactful as nuclear weapons.

Question 3

What could be the impact of a "Hiroshima" in your discipline?

In biosecurity circles, you often hear discussion of what would happen if there were “an event”. It is often not clear what that event might be, but it is presumed to be bad. The putative event could be natural or it could be artificial. Perhaps the event might kill as many people as Hiroshima did. (Though that would be hard, as even the most deadly organisms around today cannot wipe out populated cities in an instant.) Perhaps the event would be the intentional use of a biological weapon, and perhaps that weapon would be genetically modified in some way to enhance its capabilities. This would obviously be horrible. The impact would depend on where the weapon came from, and who used it. Was it the result of an ongoing state program? Was it a sample deployed, or stolen, from a discontinued program? Or was it built and used by a terrorist group? A state can be held accountable by many means, but we are finding it challenging to hold non-state groups to account. If the organism is genetically modified, it is possible that there will be pushback against the technology. But biotechnology is producing huge benefits today, and restrictions motivated by the response to an event would reduce those benefits. It is also very possible that biotechnology will be the primary means to provide remedies to bioweapons (probably vaccines or drugs), in which case an event might wind up pushing the technology even faster.

Question 4

After 1945, physicists, including Einstein, engaged in ethical reflection on their own work. Has your discipline done the same? Is it doing the same today?

Ethical reflection has been built into biotechnology from its origins. The early participants met at Asilomar to discuss the implications of their work. Today, students involved in the International Genetically Engineered Machines (iGEM) competition are required to complete a “policy and practices” (also referred to as “ethical, legal, and social implications” (ELSI)) examination of their project. This isn’t window dressing, by any means. Everyone takes it seriously. 

Question 5

Do you think it would be necessary to raise public awareness about the issues related to your work?

Well, I’ve been writing and speaking about this issue for 15 years, trying to raise awareness of biotechnology and where it is headed. My book, “Biology is Technology”, was specifically aimed at encouraging public discussion. But we definitely need to work harder to understand the scope and impact of biotechnology on our lives. No government measures very well the size of the biotechnology industry – either in terms of revenues or in terms of benefits – so very few people understand how economically pervasive it is already. 

Question 6

What is, in your view, the degree of liberty scientists have in the face of the political and industrial powers that will exploit the results of their scientific work?

Scientists face the same expectation of personal responsibility as every other member of the societies to which they belong. That’s pretty simple. And most scientists are motivated by ideals of truth, the pursuit of knowledge, and improving the human condition. That is one reason why most scientists publish their results for others to learn from. But it is less clear how to control scientific results after they are published. I would turn your question in another direction, and say politicians and industrialists should be responsible for how they use science, rather than putting this all on scientists. If you want to take this back to the bomb, the Manhattan Project was a massive military operation in a time of war, implemented by both government and the private sector. It relied on science, to be sure, but it was very much a political and industrial activity – you cannot divorce these two sides of the Project.

Question 7

Do you think about specific measures [?] to prevent another Hiroshima?

I constantly think about how to prevent bad things from happening. We have to pay attention to how new technologies are developed and used. That is true of all technologies. For my part, I work domestically and internationally to make sure policy makers understand where biotechnology is headed and what it can do, and also to make sure it is not misused. 

But I think the question is rather off target. Bombing Hiroshima was a conscious decision made by an elected leader in a time of war. It was a very specific sort of event in a very specific context. We are not facing any sort of similar situation. If the intent of the question is to make an analogy to intentional use of biological weapons, these are already illegal, and nobody should be developing or storing them under any circumstances. The current international arms control regime is the way to deal with it. If the intent is to allude to the prevention of “bad stuff”, then this is something that every responsible citizen should be doing anyway. All we can do is pay attention and keep working to ensure that technologies are not used maliciously.

Brewing Bad Biosecurity Policy

Last week brought news of a truly interesting advance in porting opioid production to yeast. This is pretty cool science, because it involves combining enzymes from several different organisms to produce a complex and valuable chemical, although no one has yet managed to integrate the whole synthetic pathway in microbes. It is also potentially pretty cool economics, because implementing opiate production in yeast should dramatically lower the price of a class of important pain medications, to the point that developing countries might finally be able to afford them.

Alongside the scientific article was a Commentary – with images of drug dens and home beer brewing – explicitly suggesting that high doses of morphine and other addictive narcotics would soon be brewed at home in the garage. The text advertised “Home-brew opiates” – wow, just like beer! The authors of the Commentary used this imagery to argue for immediate regulation of 1) yeast strains that can make opioids (even though no such strains exist yet), and 2) the DNA sequences that code for the opioid synthesis pathways. This is a step backward for biosecurity policy, by more than a decade, because the proposal embraces measures known to be counterproductive for security.

The wrong recipe.

I'll be very frank here – proposals like this are deep failures of the science policy enterprise. The logic that leads to “must regulate now!” is 1) methodologically flawed and 2) ignores data we have in hand about the impacts of restricting access to technology and markets. In what follows, I will deal in due course with both kinds of failures, as well as looking at the predilection to assume regulation and restriction should be the primary policy response to any perceived threat.

There are some reading this who will now jump to “Carlson is yet again saying that we should have no regulation; he wants wants everything to be available to anyone.” This is not my position, and never has been. Rather, I insist that our policies be grounded in data from the real world. And the real world data we have demonstrates that regulation and restriction often cause more harm than good. Moreover, harm is precisely the impact we should expect by restricting access to democratized biological technologies. What if even simple analyses suggests that proposed actions are likely to make things worse? What if the specific policy actions recommended in response to a threat have already been shown to exacerbate damage from the threat? That is precisely the case here. I am constantly confronted with people saying, "That's all very well and good, but what do you propose we do instead?" The answer is simple: I don't know. Maybe nothing. Maybe there isn't anything we can do. But for now, I just want us to not make things worse. In particular I want to make sure we don't screw up the emerging bioeconomy by building in perverse incentives for black markets, which would be the worst possible development for biosecurity.

Policy conversations at all levels regularly make these same mistakes, and the arguments are nearly uniform in structure. “Here is something we don't know about, or are uncertain about, and it might be bad – really, really bad – so we should most certainly prepare policy options to prevent the hypothetical worst!” Exclamation points are usually just implied throughout, but they are there nonetheless. The policy options almost always involve regulation and restriction of a technology or process that can be construed as threatening, usually with little or no consideration of what that threatening thing might plausibly grow into, nor of how similar regulatory efforts have fared historically.

This brings me to the set up. Several news pieces (e.g., the NYT, Buzzfeed) succinctly pointed out that the “home-brew” language was completely overblown and inflammatory, and that the Commentary largely ignored both the complicated rationale for producing opioids in yeast and the complicated benefits of doing so. The Economist managed to avoid getting caught up in discussing the Commentary, remaining mostly focussed on the science, while in the last paragraph touching on the larger market issues and potential future impacts of “home brew opium” to pull the economic rug out from under heroin cartels. (Maybe so. It's an interesting hypothesis, but I won't have much to say about it here.) Over at Biosecu.re, Piers Millet – formerly of the Biological Weapons Convention Implementation Support Unit – calmly responded to the Commentary by observing that, for policy inspiration, the authors look backward rather than forward, and that the science itself demonstrates the world we are entering requires developing completely new policy tools to deal with new technical and economic realities.

Stanford's Christina Smolke, who knows a thing or two about opioid production in yeast, observed in multiple news outlets that getting yeast to produce anything industrially at high yields is finicky to get going and then hard to maintain as a production process. It's relatively easy to produce trace amounts of lots of interesting things in microbes (ask any iGEM team); it is very hard and very expensive to scale up to produce interesting amounts of interesting things in microbes (ask any iGEM team). Note that we are swimming in data about how hard this is to do, which is an important part of this story. In addition to the many academic examples of challenges in scaling up production, the last ten years are littered with startups that failed at scale up. The next ten years, alas, will see many more.

Even with an engineered microbial strain in hand, it can be extraordinarily hard to make a microbe jump through the metabolic and fermentation hoops to produce interesting/useful quantities of a compound. And then transferring that process elsewhere is very frequently its own expensive and difficult effort. It is not true that you can just mail a strain and a recipe from one place to another and automatically get the same result. However, it is true that all this will get easier over time, and many people are working on reproducible process control for biological production.

That future looks amazing. I've written many times about how the future of the economy looks like beer and cows – in other words, that our economy will inevitably be based on distributed biological manufacturing. But that is the future: i.e., not the present. Nor is it imminent. I truly wish it were imminent, but it is not. Whole industries exist to solve these problems, and much more money and effort will be spent before we get there. The economic drivers are huge. Some of the investments made by Bioeconomy Capital are, in fact, aimed at eventually facilitating distributed biological manufacturing. But, if nothing else, these investments have taught me just how much effort is required to reach that goal. If anybody out there has a credible plan to build the Cowborg or to microbrew chemicals and pharmaceuticals as suggested by the Commentary, I will be your first investor. (I said “credible”! Don't bother me otherwise.) But I think any sort of credible plan is years away. For the time being, the only thing we can expect to brew like beer is beer.

FBI Supervisory Special Agent Ed You makes great use of the “brewing bad” and “baking bad” memes, mentioned in the Commentary, in talking to students and professionals alike about the future of drug production. But this is in the context of taking personal responsibility for your own science and speaking up when you see something dangerous. I've never heard Ed say anything about increasing surveillance and enforcement efforts as the way forward. In fact, in the Times piece, Ed specifically says, “We’ve learned that the top-down approach doesn’t work.” I can't say exactly why Ed chose that turn of phrase, but I can speculate that it is based on 1) his own experience as a professional bench molecular biologist, 2) the catastrophically bad impacts of the FBI's earlier arrests and prosecutions of scientists and artists for doing things that were legal, and 3) the official change in policy from the White House and National Security Council away from suppression and toward embracing and encouraging garage biology. The standing order at the FBI is now engagement. In fact, Ed You's arrival on the scene was coincident with any number of positive policy changes in DC, and I am happy to give him all the credit I can. Moreover, I completely agree with Ed and the Commentary authors that we should be discussing the implications of new technologies early on, an approach I have been advocating for 15 years. But I completely disagree with the authors that the current or future state of the technology indicates a need to prepare some sort of regulatory response. We tried regulating fermentation once before; that didn't work out so well [1].

Badly baked regulatory policy.

So now we're caught up to about the middle of the Commentary. At this point, the story is like other such policy stories. “Assume hypothetical thing is inevitable: discuss and prepare regulation.” And like other such stories, here is where it runs off the rails with a non sequitur common in policy work. Even if the assumption of the thing's inevitability is correct (which is almost always debatable), the next step should be to assess the impact of the thing. Is it good, or is it bad? (By a particular definition of good and bad, of course, but never mind that for now.) Usually, this question is actually skipped and the thing is just assumed to be bad and in need of a policy remedy, but the assumption of badness, breaking or otherwise, isn't fatal for the analysis.

Let's say it looks bad – bad, bad, bad – and the goal of your policy is to try to either head it off or fix it. First you have to have some metric to judge how bad it is. How many people are addicted, or how many people die, or how is the crime rate affected? Just how bad is it breaking? Next – and this is the part the vast majority of policy exercises miss – you have to try to understand what happens in the absence of a policy change. What is the cost of doing nothing, of taking no remediating action? Call this the null hypothesis. Maybe there is even a benefit to doing nothing. Only now, after evaluating the null hypothesis, are you in a position to propose remedies, because only now do you have a metric for comparing costs and benefits. If you leap directly to “the impacts of doing nothing are terrible, so we must do something, anything, because otherwise we are doing nothing”, then you have already lost. To be sure, policy makers and politicians feel that their job is to do something, to take action, and that if they are doing nothing then they aren't doing their jobs. That is a recipe for bad policy. Without the null hypothesis, your policy development is a waste of time and could, potentially, make matters worse. This happens time and time again. Prohibition, for example, was exactly this sort of failure, costing much more than it ever delivered in benefits [2].
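The null-hypothesis logic above can be made concrete with a toy calculation. All of the figures below are invented purely for illustration; the point is the structure of the comparison, not the numbers.

```python
# Toy illustration of evaluating a policy against the "do nothing" null hypothesis.
# All figures are invented; the point is the comparison, not the numbers.

def net_cost(direct_harm, enforcement_cost, side_effects):
    """Total societal cost of a scenario, in arbitrary units."""
    return direct_harm + enforcement_cost + side_effects

# Null hypothesis: do nothing. The harm continues, but nothing else is added.
do_nothing = net_cost(direct_harm=100, enforcement_cost=0, side_effects=0)

# Proposed remedy: enforcement reduces the direct harm somewhat, but costs
# money to run and creates black-market side effects of its own.
crackdown = net_cost(direct_harm=80, enforcement_cost=40, side_effects=30)

# Only by comparing against the null can we see whether the remedy helps.
print(do_nothing, crackdown)  # 100 150
```

In this made-up example the crackdown "works" in the narrow sense of reducing direct harm, yet leaves society worse off overall; skipping the null-hypothesis row is exactly how such policies get adopted anyway.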

We keep making the same mistake. We have plenty of data and reporting, courtesy of the DEA, that the ongoing crackdown on methamphetamine production has created bigger and blacker markets, as well as mayhem and violence in Mexico, all without much impact on domestic drug use. Here is the DEA Statistics & Facts page – have a look and then make up your own mind.

I started writing about the potential negative impacts of restricting access to biological technologies in 2003 (PDF), including the likely emergence of black markets in the event of overregulation. I looked around for any data I could find on the impacts of regulating democratized technologies. In particular, I happened upon the DEA's first reporting of the impacts of the then newly instituted crackdown on domestic methamphetamine production and distribution. Even in 2003, the DEA was already observing that it had created bigger, blacker markets – which are by definition harder to surveil and disrupt – without impacting meth use. The same story has played out in cocaine production and distribution, and more recently in the markets for “bath salts”, aka “legal highs”.

That is, we have multiple, clear demonstrations that, rather than improving the world, restricting access to distributed production can instead cause harm. But, really, when has this approach ever worked? And why do people think going down the same path in the future will lead anywhere else? I am still looking for data – any data at all – that supports the assertion that regulating biological technologies will have any different result. If you have such data, bring it; let's see it. In the absence of that data, policy proposals that lead with regulation and restriction are doomed to repeat the failures of the past. It has always seemed to me a terrible idea to transfer such policies over to biosecurity. Yet that is exactly what the Commentary proposes.

Brewing black markets.

The fundamental problem with the approach advocated in the Commentary is that security policies, unlike beer brewing, do not work equally well across all technical and economic scales. What works in one context will not work in another. Nuclear weapons can be secured by guns, gates, and guards because they are expensive to build and the raw materials are hard to come by, so heavy touch regulation works just fine. There are some industries – as it happens, beer brewing – where only light touch regulation works. In the U.S., we tried heavy touch regulation in the form of Prohibition, and it failed miserably, creating many more problems than it solved. There are other industries, for example DNA and gene synthesis, in which even light touch regulations are a bad idea. Indeed, light touch regulation has already created the problem it was supposed to prevent, namely the existence of DNA synthesis providers that 1) intentionally do not screen their orders and 2) ship to countries and customers that are on unofficial black lists.

For those who don't know this story: In early 2013, the International Council for the Life Sciences (ICLS) convened a meeting in Hong Kong to discuss "Codes of Conduct" for the DNA synthesis industry, namely screening orders and paying attention to who is doing the ordering. According to various codes and guidelines promulgated by industry associations and the NIH, DNA synthesis providers are supposed to reject orders that are similar to sequences that code for pathogens, or genes from pathogens, and it is suggested that they do not ship DNA to certain countries or customers (the unofficial black list). Here is a PDF of the meeting report; be sure to read through Appendix A.

The report is fairly anodyne in describing what emerged in discussions. But people who attended have since described in public the Chinese DNA synthesis market as follows. There are three tiers of DNA providers. The first tier is populated with companies that comply with the various guidelines and codes promulgated internationally, because this tier serves international markets. There is a second tier that appears to similarly comply because, while they serve primarily the large internal market, these companies have aspirations of also serving the international market. There is a third tier that exists specifically to serve orders from customers seeking ways around the guidelines and codes. (One company in this tier was described to me as a "DNA shanty", with the employees living over the lab.) Thus the relatively light touch guidelines (which are not laws) have directly incentivized exactly the behavior they were supposed to prevent. This is not a black market, per se, and cannot accurately be described as illegal, so let's call it a "grey market".

I should say here that this is entirely consistent with my understanding of biotech in China. In 2010, I attended a warm-up meeting for the last round of BWC negotiations. After that meeting, I chatted with one of the Chinese representatives present, hoping to gain a little bit of insight into the size of the Chinese bioeconomy and the state of the industry. My query was met with frank acknowledgment that the Chinese government isn't able to keep track of the industry, doesn't know how many companies are active, how many employees they have, or what they are up to, and so doesn't hold out much hope of controlling the industry. I covered this a bit in my 2012 Biodefense Net Assessment report for DHS. (If anyone has any new insight into the Chinese biotech industry, I am all ears.) Not that the U.S. or Europe is any better in this regard, as our mechanisms for tracking the biotech industry are completely dysfunctional, too. There could very well be DNA synthesis providers operating elsewhere that don't comply with the recommended codes of conduct: we have no real means of broadly surveying for this behavior, and no physical means either to track it remotely or to control it.

I am a little bit sensitive about the apparent emergence of the DNA synthesis grey market, because I warned for years in print and in person that DNA screening would create exactly this outcome. I was condescendingly told on many occasions that it was foolish to imagine a black market for DNA. (And anyway, we have to do something, right?) But it was never very complicated to think this through. DNA is cheap, and getting cheaper. You need this cheap DNA as code to build more complicated, more valuable things. Ergo, restrictions on DNA synthesis will incentivize people to seek, and to provide, DNA outside any control mechanism. The logic is pretty straightforward, and denying it is simply willful self-deception. Regulation of DNA synthesis will never work. In the vernacular of the day: because economics. To make it even simpler: because humans.

So the idea that people are still suggesting proscription of certain DNA sequences is a viable route to security just rankles. And it is demonstrably counterproductive. The restrictions incentivize the bad behavior they are supposed to prevent, probably much earlier than might have happened otherwise. The take home message here is that not all industries are the same, because not all technologies are the same, and that our policy approaches should take into account these differences rather than papering over them. In particular, restricting access to information in our modern economy is a losing game. 

Where do we go from here?

We are still at the beginning of biotech. This is the most important time to get it right. This is the most important time not to screw up and make things worse. And it is important that we are at the beginning, because things are not yet screwed up.

Conversely, we are well down the road in developing and deploying drug policies, with much damage done. To be sure, despite the accumulated and ongoing costs, it is not at all clear that suddenly legalizing drugs such as meth or cocaine would be a positive step, and I am not in any way making that argument. But it is abundantly clear that drug enforcement activities have created the world we live in today. Was there an alternative? If the DEA had been able to do a cost/benefit analysis of the impacts of its actions – that is, predict the emergence of drug trafficking organizations (DTOs) and their role in production, trafficking, and violence – would the policy response 15 years ago have been any different? If Nixon had more thoughtfully considered even what was known 50 years ago about the impacts of proscription, would he have launched the war on drugs? That is a hard question, because drug policy is clearly driven more by stories and personal politics than by facts. I am inclined to think the present drug policy mess was inevitable. Even with the DEA's self-diagnosed role in creating and sustaining DTOs, the national conversation is still largely dominated by “the war on drugs”. And thus the first reaction to the prospect of microbial narcotics production is to employ strategies and tactics that have already failed elsewhere. I would hate to think we are in for a war on microbes, because that is doomed to failure.

But we haven't yet made all those mistakes with biological technologies. I continue to hope that, if nothing else, we will avoid making things worse by rejecting policies we already know won't work. 

Notes:

[1] Pause here to note that even this early in the setup, the Commentary conflates, via words and images, the use of yeast in home brew narcotics with centralized brewing of narcotics by cartels. These describe two quite different, and perhaps mutually exclusive, technoeconomic futures. Drug cartels very clearly have the resources to develop technology. Depending on whether you listen to the U.S. Navy or the U.S. Coast Guard, either 30% or 80% of the cocaine delivered to the U.S. is transported at some point in semisubmersible cargo vessels or in fully submersible cargo submarines. These 'smugglerines', if you will, are the result of specific technology development efforts directly incentivized by governmental interdiction efforts. Similarly, if cartels decide that developing biological technologies suits their business needs, they are likely to do so. And cartels certainly have incentives to develop opioid-producing yeast, because fermentation usually lowers the cost of goods by between 50% and 90% compared to production in plants. Again, cartels have the resources, and they aren't stupid. If cartels do develop these yeast strains, for competitive reasons they certainly won't want anyone else to have them. Home brew narcotics would further undermine their monopoly.
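To see what that cost-of-goods range means in practice, here is a back-of-the-envelope sketch. The plant-based production cost is a made-up placeholder, not real data; only the 50-90% reduction range comes from the text above.

```python
# Back-of-the-envelope cost comparison using the 50-90% reduction cited above.
# The plant-based cost figure is an invented placeholder, not real data.
plant_cost_per_kg = 1000.0  # hypothetical cost of goods via plant extraction

# Fermentation reportedly lowers the cost of goods by 50% to 90%:
ferment_best = plant_cost_per_kg * (1 - 0.90)   # best case: 90% reduction
ferment_worst = plant_cost_per_kg * (1 - 0.50)  # worst case: 50% reduction

print(f"${ferment_best:.0f}-${ferment_worst:.0f}/kg vs ${plant_cost_per_kg:.0f}/kg")
# $100-$500/kg vs $1000/kg
```

Even at the conservative end of the range, the margin advantage is the kind of incentive a cartel (or any producer) would notice.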

[2] Prohibition was obviously the result of a complex socio-political situation, just as was its repeal. If you want a light touch look at the interaction of the teetotaler movement, the suffragette movement, and the utility of Prohibition in continued repression of freed slaves after the Civil War, check out Ken Burns's “Prohibition” on Netflix. But after all that, it was still a dismal failure that created more problems than it solved. Oh, and Prohibition didn't accomplish its intended aims. Anheuser-Busch thrived during those years. Its best selling products at the time were yeast and kettles (see William Knoedleseder's Bitter Brew)...

Announcing Bioeconomy Capital

I am pleased to announce the launch of Bioeconomy Capital. Our investments so far are:

  • Riffyn, which is building software for experimental process design and analytics to improve reproducibility and tech transfer in life science and materials R&D;
  • Synthace, which is increasing the reliability, quality, and scale of biological science;
  • RoosterBio, which is creating exponential advances in stem cell manufacturing to provide raw materials for cell-based therapies, biofabrication, and cellular ink for 3D BioPrinting.

Biosecurity is Everyone's Business (Part 2)

(Here is Part 1.)

Part 2. From natural security to neural security

Humans are fragile. For most of history we have lived with the expectation that we will lose the use of organs, and some of us limbs, as we age or suffer injury. But that is now changing. Prostheses are becoming more lifelike and more useful, and replacement organs have been used to save lives and restore function. But how robust are the replacement parts? The imminent prospect of technological restoration of human organs and limbs lost to injury or disease is cause to think carefully about increasing both our biological capabilities and our technological fragilities.

Technology fails us for many reasons. A particular object or application may be poorly designed or poorly constructed. Constituent materials may be faulty, or maintenance may be shoddy. Failure can result from inherent security flaws, which can be exploited directly by those with sufficient technical knowledge and skill. Failure can also be driven by clever and conniving exploits of the overall system that focus on its weakest link, almost always the human user, by inducing them to make a mistake or divulge critical information. Our centuries of experience and documentation of such failures should inform our thinking about the security of emerging technologies, particularly as we begin to fuse biology with electronic systems. The growing scope of biotechnology will therefore require constant reassessment of what vulnerabilities we are introducing through that expansion. Examining the course of other technologies provides some insight into the future of biology.

We carry powerful computers in our pockets, use the internet to gather information and access our finances, and travel the world in aircraft that are often piloted and landed by computers. We are told we can trust this technology with our financial information, our identities and social networks, and, ultimately, our lives. At the same time, technology is constantly shown to be vulnerable and fragile, failing at a non-trivial rate -- resulting in identity theft, financial loss, and sometimes personal injury and death. We embrace technology despite well-understood risks; automobiles, electricity, fossil fuels, automation, and bicycles all kill people every day in predictable numbers. Yet we continue to use technology, integrating it further into multiple arenas of our lives, because we decide that the benefits outweigh the risks.

Healthcare is one arena in which risks are multiplying. The IT security community has for some years been aware of network vulnerabilities in medical devices such as pacemakers and implantable defibrillators. The ongoing integration of networked medical devices in health care settings, an integration that is constantly introducing both new capabilities and new vulnerabilities, is now the focus of extensive efforts to improve security. The impending introduction of networked, semi-autonomous prostheses raises obvious similar concerns. Wi-fi enabled pacemakers and implantable defibrillators are just the start, as soon we will see bionic arms, legs, and eyes with network connections that allow performance monitoring and tuning.

Eventually, prostheses will not simply restore "human normal" capabilities; they will also augment human performance. I learned recently that DARPA explicitly chose to limit the strength of its robotic arm, but that can't last: science-fiction levels of robotic strength are coming. What happens when hackers get ahold of this technology? How will people begin to modify themselves and their robotic appendages? And, of course, the flip side of having enhanced physical capabilities is having enhanced vulnerabilities. By definition, tuning can improve or degrade performance, and this raises an important security question: who holds the password for your shiny new arm? Did someone remember to overwrite the factory default password? Is the new password susceptible to a dictionary attack? The future brings even more concerns. Control connections to a prosthesis are bi-directional and, as the technology improves, ever better neural interfaces will eventually jack these prostheses directly into the brain. "Tickling" a robotic limb could take on a whole new meaning, providing a means to connect various kinds of external signals to the brain in new ways.

Beyond limbs, we must also consider neural connections that serve to open entirely novel senses. It is not a great leap to envision a wide range of ensuing digital-to-neural input/output devices. These technologies are evolving at a rapid rate, and through them we are on the cusp of opening up human brains to connections with a wide range of electromechanical hardware capabilities and, indeed, all the information on the internet.

Just this week saw publication of a cochlear implant that delivers a gene therapy to auditory neurons, promoting the formation of electrical connections with the implant and thereby dramatically improving the hearing response of test animals. We are used to the idea of digital music files being converted by speakers into sound waves, which enter the brain through the ear. But the cochlear implant is basically an ethernet connection wired to your auditory nerve, which in principle means any signal can be piped into your brain. How long can it be before we see experiments with a cochlear (or other) implant that enables direct conversion of arbitrary digital information into neural signals? At that point, "hearing" might extend into every information format. So, again we must ask: who holds the password to your brain implant?

Hacking the Bionic Man

As this technology is deployed in the population, it is clear that there can be no final and fixed security solution. Most phone and computer users are now all too aware that new hardware, firmware, and operating systems always introduce new kinds of risks and threats. The same will be true of prostheses. The constant race to chase down security holes in new product upgrades will soon extend directly into human brains. As more people are exposed to medical device vulnerabilities, security awareness and improvement must become an integrated part of medical practice. This discussion can easily be extended to potential vulnerabilities that will arise from the inevitable integration into human bodies of not just electromechanical devices, but of ever more sophisticated biological technologies. The exploration of prosthesis security, loosely defined, gives some indication of the scope of the challenge ahead.

The class of things we call prostheses will soon expand beyond electromechanical devices to encompass biological objects such as 3D printed tissues and lab-grown organs. As these cell-based therapies begin to enter human clinical trials, we must assess the security of both the therapies themselves and the means used to create and administer them. If replacement organs and tissues are generated from cells derived from donors, what vulnerabilities do the donors have? How are those donor vulnerabilities passed along to the recipients? Yes, you have an immune system that does wonders most of the time. But are your natural systems up to the task of handling the biosecurity of augmented organs?

What does security even mean in this context? In addition to standard patient work-ups, should we begin to fully sequence the genomes of donor tissues, first to identify potential known health issues, and then to build a database that can be re-queried as new genetic links to disease are discovered? Are there security holes in the 3D printers and other devices used to manipulate cells and tissues? What are the long-term security implications of deploying novel therapeutic tissues in large numbers of military and civilian personnel? What are the long-term security implications of using both donor and patient tissue as seeds of induced pluripotent stem cells, or of differentiating any stem cell line for use in therapies? Do we fully understand the complement of microbes and genomes that may be present in donor samples, or lying dormant in donor genomes, or that may be introduced via laboratory procedures and instruments used to process cells for use as therapies? What is the genetic security of a modified cell line or induced pluripotent stem cell? If there is a genetic modification embedded in your replacement heart tissue, where did the new DNA come from, and are you sure you know everything that it encodes? As with information technologies, we should expect that these new biological technologies will sometimes arrive with accidental vulnerabilities; they may also come with intentionally introduced back doors. The economic motivation to create new prostheses, as well as to exploit vulnerabilities, will soon introduce market competition as a factor in biosecurity.

Competition often drives perverse strategic decisions when it comes to security. Firms rush to sell hardware and software that are said to be secure, only to discover that constant updates are required to patch security holes. We are surrounded by products in endless beta. Worse yet, manufacturers have been known to sit on security holes in the naive hope that no one else will notice. Vendors sometimes appear no more literate about the security of hardware and software than are their customers. What will the world look like when electromechanical and biological prostheses are similarly in constant states of upgrade? Who will you trust to build/print/grow a prosthesis? Are you going to place your faith in the FDA to police all these risks? (Really?) If you decide instead to place your faith in the market, how will you judge the trustworthiness of firms that sell aftermarket security solutions for your bionic leg or replacement liver?

The complexity of the task at hand is nearly overwhelming. Understanding the coming fusion of technologies will require competency in software, hardware, wetware, and security -- where are those skill sets being developed in a compatible, integrated manner? This just leads to more questions: Are there particular countries that will have a competitive advantage in this area? Are there particular countries that will be hotbeds of prosthesis malware creation and distribution?

The conception of security, whether of individuals or nation states, is going to change dramatically as we become ever more economically dependent upon the market for biological technologies. Given the spreading capability to participate and innovate in technology development, which inevitably amplifies the number and effect of vulnerabilities of all kinds, I suspect we need to re-envision at a very high level how security works.

[Coming soon: Part 3.]

 

Biosecurity is Everyone's Business (Part 1)

Part 1. The ecosystem is the enterprise

We live in a society increasingly reliant upon the fruits of nature. We consume those fruits directly, and we cultivate them as feedstocks for fuel, industrial materials, and the threads on our backs. As a measure of our dependence, revenues in the bioeconomy are rising rapidly, demonstrating a demand for biological products that is growing much faster than the global economy as a whole.

This demand represents an enormous market pull on technology development, commercialization, and, ultimately, natural resources that serve as feedstocks for biological production. Consequently, we must assess carefully the health and longevity of those resources. Unfortunately, it is becoming ever clearer that the natural systems serving to supply our demand are under severe stress. We have been assaulting nature for centuries, with the heaviest blows delivered most recently. Nature, in the most encompassing sense of the word, has been astonishingly resilient in the face of this assault. But the accumulated damage has cracked multiple holes in ecosystems around the globe. There are very clear economic costs to this damage -- costs that compound over time -- and the cumulative damage now poses a threat to the availability of the water, farmland, and organisms we rely on to feed ourselves and our economy.

I would like to clarify that I am not predicting collapse, nor that we will run out of resources; rather, I expect new technologies to continue increasing productivity and improving the human condition. Successfully developing and deploying those technologies will, obviously, further increase our economic dependency on nature. As part of that growing dependency, businesses that participate in the bioeconomy must understand and ensure the security of feedstocks, transportation links, and end use, often at a global scale. Consequently, it behooves us to thoroughly evaluate any vulnerabilities we are building into the system so that we can begin to prepare for inevitable contingencies.

Revisiting the definition of biosecurity: from national security to natural security, and beyond

Last year John Mecklin at Bulletin of the Atomic Scientists asked me to consider the security implications of the emerging conversation (or, perhaps, collision) between synthetic biology and conservation biology. This conversation started at a meeting last April at the University of Cambridge, and is summarized in a recent article in Oryx. What I came up with for BAS was an essay that cast very broadly the need to understand threats to all of the natural systems we depend on. Quantifying the economic benefit of those systems, and the risk inherent in our dependence upon them, led me directly to the concept of natural security.

Here I want to take a stab at expanding the conversation further. Rapidly rising revenues in the bioeconomy, and the rapidly expanding scope of application, must critically inform an evolving definition of biosecurity. In other words, because economic demand is driving technology proliferation, we must continually refine our understanding of what it is that we must secure and from where threats may arise.

Biosecurity has typically been interpreted as the physical security of individuals, institutions, and the food supply in the context of threats such as toxins and pathogens. These will, of course, continue to be important concerns: new influenza strains constantly emerge to cause human and animal health concerns; the (re?)emergent PEDS virus has killed an astonishing 10% of U.S. pigs this year alone; within the last few weeks there has been an alarming uptick in the number of human cases and deaths caused by MERS. Beyond these natural threats are pathogens created by state and non-state organizations, sometimes in the name of science and preparation for outbreaks, while sometimes escaping containment to cause harm. Yet, however important these events are, they are but pieces of a biosecurity puzzle that is becoming ever more complex.

Due to the large and growing contribution of the bioeconomy, no longer are governments concerned merely with the proverbial white powder produced in a state-sponsored lab, or even in a 'cave' in Afghanistan. Because economic security is now generally included in the definition of national security, the security of crops, drug production facilities, and industrial biotech will constitute an ever more important concern. Moreover, in the U.S., as made clear by the National Strategy for Countering Biological Threats (PDF), the government has established that encouraging the development and use of biological technologies in unconventional environments (i.e., "garages and basements") is central to national security. Consequently, the concept of biosecurity must comprise the entire value chain from academics and garage innovators, through production and use, to, more traditionally, the health of crops, farm animals, and humans. We must endeavor to understand, and to buttress, fragility at every link in this chain.

Beyond the security of specific links in the bioeconomy value chain we must examine the explicit and implicit connections between them, because through our behavior we connect them. We transport organisms around the world; we actively breed plants, animals, and microbes; we create new objects with flaws; we emit waste into the world. It's really not that complicated. However, we often choose to ignore these connections because acknowledging them would require us to respect them, and consequently to behave differently. But that change in behavior must be the future of biosecurity. 

From an enterprise perspective, as we rely ever more heavily on biology in our economy, so must we comprehensively define 'biosecurity' to adequately encompass relevant systems. Vulnerabilities in those systems may be introduced intentionally or accidentally. An accidental vulnerability may lie undiscovered for years, as in the case of the recently disclosed Heartbleed hole in the OpenSSL cryptography library, until it is identified, at which point it becomes a threat. The risk, even in open source software, is that the vulnerability may be identified by organizations that then exploit it before it becomes widely known. This is reported to be true of the NSA, which is said to have understood and exploited Heartbleed for at least two years before its recent public disclosure. Our biosecurity challenge is to carefully, and constantly, assess how the world is changing and to address shortcomings as we find them. It will be a transition every bit as painful as the one we are now experiencing for hardware and software security.

(Here is Part 2.)

Using programmable inks to build with biology: mashing up 3D printing and biotech

Scientists and engineers around the globe dream of employing biology to create new objects. The goal might be building replacement organs, electronic circuits, living houses, or cowborgs and carborgs (my favorites) that are composed of both standard electromechanical components and novel biological components. Whatever the dream, and however outlandish, we are getting closer every day.

Looking a bit further down the road, I would expect organs and tissues that have never before existed. For example, we might be able to manufacture hybrid internal organs for the cowborg that process rough biomass into renewable fuels and chemicals. Both the manufacturing process and the cowborg itself might utilize novel genetic pathways generated in DARPA's Living Foundries program. The first time I came across ideas like the cowborg was in David Brin's short story "Piecework". I've pondered this version of distributed biological manufacturing for years, pursuing the idea into microbrewing, and then to the cowborg, the economics of which I am now exploring with Steve Aldrich from bio-era.

Yet as attractive and powerful as biology is as a means for manufacturing, I am not sure it is powerful enough. Other ways that humans build things, and that we build things that build things, are likely to be part of our toolbox well into the future. Corrosion-resistant plumbing and pumps, for example, constitute very useful kit for moving around difficult fluids, and I wouldn't expect Teflon to be produced biologically anytime soon. Photolithography, electrodeposition, and robotics, now emerging in the form of 3D printing, enable precise control over the position of matter, though frequently using materials and processes inimical to biology. Humans are really good at electrical and mechanical engineering, and we should build on that expertise with biological components.

Let's start with the now hypothetical cowborg. The mechanical part of a cowborg could be robotic, and could look like Big Dog, or perhaps simply a standard GPS-guided harvester, which comes standard with air conditioning and a DVD player to keep the back-up human navigation system awake. This platform would be supplemented by biological components, initially tanks of microbes, that turn raw feedstocks into complex materials and energy. Eventually, those tanks might be replaced by digestive organs and udders that produce gasoline instead of milk, where the artificial udders are enabled by advances in genetics, microbiology, and bioprinting. Realizing this vision could make biological technologies part of literally anything under the sun. In a simple but effective application along these lines, the ESA is already using "burnt bone charcoal" as a protective coating on a new solar satellite.

But there is one persistent problem with this vision: unless it is dead and processed, as in the case of the charcoal spacecraft coating, biology tends not to stay where you put it. Sometimes this will not matter, such as with many replacement transplant organs that are obviously supposed to be malleable, or with similar tissues made for drug testing. (See the recent Economist article, "Printing a bit of me", this CBS piece on Alexander Seifalian's work at University College London, and this week's remarkable news out of Anthony Atala's lab.) Otherwise, cells are usually squishy, and they tend to move around, which complicates their use in fabricating small structures that require precise positioning. So how do you use biology to build structures at the micro-scale? More specifically, how do you get biology to build the structures you want, as opposed to the structures biology usually builds?

We are getting better at directing organisms to make certain compounds via synthetic biology, and our temporal control of those processes is improving. We are inspired by the beautiful fabrication mechanisms that evolution has produced. Yet we still struggle to harness biology to build stuff. Will biological manufacturing ever be as useful as standard machining is, or as flexible as 3D printing appears it will be? I think the answer is that we will use biology where it makes sense, and we will use other methods where they make sense, and that in combination we will get the best of both worlds. What will it mean when we can program complex matter in space and time using a fusion of electromechanical control (machining and printing) and biochemical control (chemistry and genetics)? There are several recent developments that point the way and demonstrate hybrid approaches that employ the 3D printing of biological inks that subsequently display growth and differentiation.

Above is a slide I used at the recent SynBERC retreat in Berkeley. On the upper left, Organovo is now shipping lab-produced liver tissue for drug testing. This tissue is not yet ready for use in transplants, but it does display all the structural and biochemical complexity of adult livers. A bit further along in development are tracheas from Harvard Biosciences, which are grown from patient stem cells on 3D-printed scaffolds (Claudia Castillo was the first recipient of a transplant like this in 2007, though her cells were grown on a cadaver windpipe first stripped of the donor's cells). And then we have the paper on the right, which really caught my eye. In that publication, viruses on a 3D woven substrate were used to reprogram human cells that were subsequently cultured on that substrate. The green cells above, which may not look like much, are the result of combining 3D fabrication of non-living materials with a biological ink (the virus), which in combination serve to physically and genetically program the differentiation and growth of mammalian cells, in this case into cartilage. That's pretty damn cool.

Dreams of building with biology

Years ago, during the 2003 "DARPA/ISAT Synthetic Biology Study", we spent considerable time discussing whether biology could be used to rationally build structures like integrated circuits. The idea isn't new: is there a way to get cells to build structures at the micro- or nano-scale that could help replace photolithography and other 2D patterning techniques used to build chips? How can humans make use of cells -- lovely, self-reproducing factories -- to construct objects at the nanometer scale of molecules like proteins, DNA, and microtubules?

Cells, of course, have dimensions in the micron range, and commercial photolithography was, even in 2003, operating at about the 25 nanometer range (now at about 15 nm). The challenge is to program cells to lay down structures much smaller than they are. Biology clearly knows how to do this already. Cells constantly manufacture and use complex molecules and assemblies that range from 1 to 100 nm. Many cells move or communicate using extensions ("processes") that are only 10-20 nanometers in width but tens of microns in length. Alternatively, we might directly use synthetic DNA to construct a self-assembling scaffold at the nano-scale and then build devices on that scaffold using DNA-binding proteins. DNA origami has come a long way in the last decade and can be used to build structures that span nanometers to microns, and templating circuit elements on DNA is old news. We may even soon have batteries built on scaffolds formed by modified, self-assembling viruses. But putting all this together in a biological package that enables nanometer-scale control of fabrication across tens of centimeters, and doing it as well as lithography, and as reproducibly as lithography, has thus far proved difficult.
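To make the scale mismatch concrete, here is a quick Python sketch comparing the length scales mentioned above. The values are rounded versions of the figures in the text, and "typical cell diameter" is my own assumed round number (~10 microns), included purely for illustration:

```python
# Rough length scales from the discussion above, in nanometers.
# These are rounded illustrative values, not measurements; "typical cell
# diameter" is an assumed round figure (~10 microns).
scales_nm = {
    "molecular assembly (low end)": 1,
    "photolithography feature (~2014)": 15,
    "cell process width": 15,
    "DNA origami structure (low end)": 100,
    "typical cell diameter": 10_000,
}

litho = scales_nm["photolithography feature (~2014)"]
for name, nm in sorted(scales_nm.items(), key=lambda kv: kv[1]):
    # Express each scale as a multiple of the lithographic feature size.
    print(f"{name:32s} {nm:>8,} nm  ({nm / litho:7.2f}x lithography)")
```

The punch line is the last row: a cell is roughly 600-700 times larger than the features a modern fab can draw, which is exactly why asking cells to pattern sub-cellular structures on demand is hard.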

Conversely, starting at the macro scale, machining and 3D printing work pretty well from meters down to hundreds of microns. Below that length scale we can employ photolithography and other microfabrication methods, which can be used to produce high volumes of inexpensive objects in parallel, but which also tend to have quite high cost barriers. Transistors are so cheap that they are basically free on a per unit basis, while a new chip fab now costs Intel about $10 billion.
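A back-of-envelope amortization shows why per-unit transistor cost is "basically free" even with a $10 billion fab. Every number in this sketch other than the fab cost is an assumption chosen for illustration; real throughput, die counts, and transistor counts vary enormously by node and product:

```python
# Back-of-envelope: amortized capital cost per transistor.
# All inputs below except fab_cost_usd are illustrative assumptions.
fab_cost_usd = 10e9          # capital cost of a leading-edge fab (from text)
wafers_per_month = 50_000    # assumed fab throughput
fab_lifetime_years = 5       # assumed amortization window
chips_per_wafer = 500        # assumed good dies per 30 cm wafer
transistors_per_chip = 2e9   # assumed transistor count per chip

wafers = wafers_per_month * 12 * fab_lifetime_years
transistors = wafers * chips_per_wafer * transistors_per_chip

# Capital cost spread over lifetime output, ignoring operating costs.
cost_per_transistor = fab_cost_usd / transistors
print(f"{cost_per_transistor:.2e} dollars per transistor")
# With these assumptions: a few nanodollars per transistor.
```

Even if these assumptions are off by an order of magnitude in either direction, the answer stays in the nanodollar range, which is the sense in which transistors are free on a per-unit basis while the barrier to entry is enormous.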

My experiences working on different aspects of these problems suggest to me that, eventually, we will learn to exploit the strengths of each of the relevant technologies, just as we learn to deal with their weaknesses; through the combination of these technologies we will build objects and systems that we can only dream of today.

Squishy construction

Staring through a microscope at fly brains for hours on end provides useful insights into the difference between anatomy and physiology, between construction and function. In my case, those hours were spent learning to find a particular neuron (known as H1) that is the output of the blowfly motion measurement and computation system. The absolute location of H1 varies from fly to fly, but eventually I learned to find H1 relative to other anatomical landmarks and to place my electrode within recording range (a few tens of microns) on the first or second try. It has long been known that the topological architecture (the connectivity, or wiring diagram) of fly brains is identical between individuals of a given species, even as the physical architecture (the locations of neurons) varies greatly. This is the difference between physiology and anatomy.

The electrical and computational output of H1 is extremely consistent between individuals, which is what makes flies such great experimental systems for neurobiology. This is, of course, because evolution has optimized the way these brains work -- their computational performance -- without the constraint that all the bits and pieces must be in exactly the same place in every brain. Fly brains are constructed of squishy matter, but the computational architecture is quite robust. Over the last twenty years, humans have learned to grow various kinds of neurons in dishes, and to coax them into connecting in interesting ways, but it is usually very hard to get those cells to position themselves physically exactly where you want them, with the sort of precision we regularly achieve with other forms of matter.

Crystalline construction

The first semiconductor processing instrument I laid hands on in 1995 was a stepper. This critical bit of kit projects UV light through a mask, which contains the image of a structure or circuit, onto the surface of a photoresist-covered silicon wafer. The UV light alters the chemical structure of the photoresist, which after further processing eventually enables the underlying silicon to be chemically etched in a pattern identical to the mask. Metal or other chemicals can be similarly patterned. After each exposure, the stepper automatically shifts the wafer over, thereby creating an array of structures or circuits on each wafer. This process enables many copies of a chip to be packed onto a single silicon wafer and processed in parallel. The instruments on which I learned to process silicon could handle ~10 cm diameter wafers. Now the standard is about 30 cm, because putting more chips on a wafer reduces marginal processing costs. But it isn't cheap to assemble the infrastructure to make all this work. The particular stepper I used (this very instrument, as a matter of fact), which had been donated to the Nanofabrication Facility at Cornell and which was ancient by the time I got to it, contained a quartz lens I was told cost about $1 million all by itself. The kit used in a modern chip fab is far more expensive, and the chemical processing used to fabricate chips is quite inimical to cells. Post-processing, silicon chips can be treated in ways that encourage cells to grow on them and even to form electrical connections, but the overhead to get to that point is quite high.

Arbitrary construction

The advent of 3D printers enables the reasonably precise positioning of materials just about anywhere. Depending on how much you want to spend, you can print with different inks: plastics, composites, metals, and even cells. This lets you put stuff where you want it. The press is constantly full of interesting new examples of 3D printing, including clothes, bone replacements, objets d'art, gun components, and parts for airplanes. As promising as all this is, the utility of printing is still limited by the step size (the smallest increment of the position of the print head) and the spot size (the smallest amount of stuff the print head can spit out) of the printer itself. Moreover, printed parts are usually static: once you print them, they just sit there. But these limitations are already being overcome by using more complex inks.

Hybrid construction

If the ink used in the printer has the capacity to change after it gets printed, then you have introduced a temporal dimension into your process: now you have 4D printing. Typically, 4D printing refers to objects whose shape or mechanical properties can be dynamically controlled after construction, as with these 3D objects that fold up after being printed as 2D objects. But beyond this, if you combine squishy, crystalline, and arbitrary construction, you get a set of hybrid construction techniques that allows programming matter from the nanoscale to the macroscale in both time and space.

Above is a slide from a 2010 DARPA study on the Future of Manufacturing, from a talk in which I tried to articulate the utility of mashing up 3D printing and biotech. We have already seen the first 3D printed organs, as described earlier. Constructed using inks that contain cells, even the initial examples are physiologically similar to natural organs. Beyond tracheas, printed or lab-grown organs aren't yet ready for general use as transplants, but they are already being used to screen drugs and other chemicals for their utility and toxicity. Inks could also consist of: small molecules (i.e. chemicals) that react with each other or the environment after printing; DNA and proteins that serve structural, functional (say, electronic), or even genetic roles after printing; viruses that form structures or that are intended to interact biologically with later layers; cells that interact with each other or follow some developmental program defined genetically or by the substrate, as demonstrated in principle by the cartilage paper above.

The ability to program the three-dimensional growth and development of complex structures will have transformative impacts throughout our manufacturing processes, and therefore throughout our economy. The obvious immediate applications include patient-specific organs and materials such as leather, bone, chitin, or even keratin (think vat-grown ivory) used in contexts very different from those we are used to today.

It is hard to predict where this is going, of course, but any function we now understand for molecules or cells can be included in programmable inks. Simple two-part chemical reactions will be common in early inks, transitioning over time to more complex inks containing multiple reactants, including enzymes and substrates. Eventually, programmable printer inks will employ the full complement of genes and biochemistry present in viruses, bacteria, and eukaryotic cells. Beyond existing genetics and biochemistry, new enzymes and genetic pathways will provide materials we have never before laid hands on. Within DARPA's Living Foundries program is the 1000 Molecules program, which recently awarded contracts to use biology to generate "chemical building blocks for accessing radical new materials that are impossible to create with traditional petroleum-based feedstocks".

Think about that for a moment: it turns out that of the entire theoretical space of compounds we can imagine, synthetic chemistry can only provide access to a relatively small sample. Biology, on the other hand, in particular novel systems of (potentially novel) enzymes, can be programmed to synthesize a much wider range of compounds. We are just learning how to design and construct these pathways; the world is going to look very different in ten years' time. Consequently, as these technologies come to fruition, we will learn to use new materials to build objects that may be printed at one length scale, say centimeters, and that grow and develop at length scales ranging from nanometers to hundreds of meters.

Just as hybrid construction that combines the features of printers and inks will enable manufacturing on widely ranging length scales, so will it give us access to a wide range of time scales. A 3D printer presently runs on fairly understandable human time scales of seconds to hours. For the time being, we are still learning how to control print heads and robotic arms that position materials, so they move fairly slowly. Over time, the print head will inevitably be able to move on time scales at least as short as milliseconds. Complex inks will then extend the reach of the fabrication process into the nanoseconds on the short end, and into the centuries on the long end.

I will be the first to admit that I haven't the slightest idea what artifacts made in this way will do or look like. Perhaps we will build/grow trees the size of redwoods that produce fruit containing libations rivaling the best wine and beer. Perhaps we will build/grow animals that languidly swim at the surface of the ocean, extracting raw materials from seawater and photosynthesizing compounds that don't even exist today but that are critical to the future economy.

These examples will certainly prove hopelessly naive. Some problems will turn out to be harder than they appear today, and other problems will turn out to be much easier than they appear today. But the constraints of the past, including the oft-uttered phrase "biology doesn't work that way", do not apply. The future of engineering is not about understanding biology as we find it today, but rather about programming biology as we will build it tomorrow.

What I can say is that we are now making substantive progress in learning to manipulate matter, and indeed to program matter. Science fiction has covered this ground many times, sometimes well, sometimes poorly. But now we are doing it in the real world, and sketches like those on the slides above provide something of a map to figure out where we might be headed and what our technical capabilities will be like many years hence. The details are certainly difficult to discern, but if you step back a bit, and let your eyes defocus, the overall trajectory becomes clear.

This is a path that John von Neumann and Norbert Wiener set out on many decades ago. Physics and mathematics taught us what the rough possibilities should be. Chemistry and materials science have demonstrated many detailed examples of specific arrangements of atoms that behave physically in specific ways. Control theory has taught us both how organisms behave over time and how to build robots that behave in similar ways. Now we are learning to program biology at the molecular level. The space of the possible, of the achievable, is expanding on a daily basis. It is going to be an interesting ride.