Tim Cook is Defending Your Brain

Should the government have the right to trawl through your thoughts and memories? That seems like a question for a "Minority Report" or "Matrix" future, but legal precedent is being set today. This is what is really at stake in an emerging tussle between Washington, DC and Silicon Valley.

The Internets are all abuzz with Apple's refusal to hack an iPhone belonging to an accused terrorist. The FBI has served a court order on Apple, based on the All Writs Act of 1789, requiring Apple to break the lock that limits the number of times a passcode can be tried. Since law enforcement has been unable to crack the security of iOS on its own, it wants Apple to write special software to do the job. Here is Wired's summary. This NYT story has additional good background. The short version: should law enforcement and intelligence agencies be able to compel corporations to hack devices owned by citizens and entrusted with their sensitive information?
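
To see why that retry lock matters so much, a little arithmetic helps. The sketch below is a back-of-envelope estimate, assuming roughly 80 milliseconds per attempt for the hardware key derivation (an illustrative figure, not Apple's published specification), of how quickly an all-numeric passcode falls to brute force once the escalating delays and the ten-try auto-erase are gone:

```python
# Back-of-envelope: why the passcode retry limit is the real lock.
# Assumes ~80 ms per attempt for key derivation (illustrative, not
# Apple's spec) once retry delays and the 10-try erase are disabled.

SECONDS_PER_TRY = 0.08

def worst_case_seconds(digits: int) -> float:
    """Seconds to exhaust an all-numeric passcode space."""
    return (10 ** digits) * SECONDS_PER_TRY

for digits in (4, 6, 8):
    hours = worst_case_seconds(digits) / 3600
    print(f"{digits}-digit passcode: {hours:8.1f} hours worst case")
```

Four digits fall in minutes and six digits in about a day; the software lock, not the passcode itself, is the real defense.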

Apple CEO Tim Cook published a letter saying no, thank you, because weakening the security of iPhones would be bad for his customers and "has implications far beyond the legal case at hand". Read Cook's letter; it is thoughtful. The FBI says it is just about this one phone and "isn't about trying to set a precedent," in the words of FBI Director James Comey. But this language is neither accurate nor wise — and it is important to say so.

Once the software is written, the U.S. government can hardly argue it will never be used again, nor that it will never be stolen off government servers. And since the point of the hack is to be able to push it onto a phone without consent (which is itself a backdoor that needs closing), this software would allow breaking the locks on any susceptible iPhone, anywhere. Many commentators have observed that any effort to hack iOS this once would facilitate repetitions, and any general weakening of smartphone security could easily be exploited by governments or groups less concerned about due process, privacy, or human rights. (And you do have to wonder whether Tim Cook's position here is influenced by his experience as a gay man, a demographic that has been persecuted, if not actually prosecuted, merely for thought and intent by the same organization now sitting on the other side of the table. He knows a thing or two about privacy.) U.S. Senator Ron Wyden has a nice take on these issues. Yet while these concerns are critically important for modern life, they are also shortsighted. There is much more at stake here than just one phone, or even the fate of one particular company. The bigger, longer-term issue is whether governments should have access to electronic devices that we rely on in daily life, particularly when those devices are becoming extensions of our bodies and brains. Indeed, these devices will soon be integrated into our bodies — and into our brains.

Hacking electronically-networked brains sounds like science fiction. That is largely because there has been so much science fiction produced about neural interfaces, Matrices, and the like. We are used to thinking of such technology as years, or maybe decades, off. But these devices are already a reality, and will only become more sophisticated and prevalent over the coming decades. Policy, as usual, is way behind.

My concern, as usual, is less about the hubbub in the press today and more about where this all leads in ten years. The security strategy and policy we implement today should be designed for a future in which neural interfaces are commonplace. Unfortunately, today's politicians and law enforcement are happy to set legal precedent that will create massive insecurity in just a few years. We can be sure that any precedent of access to personal electronic devices adopted today, particularly any precedent in which a major corporation is forced to write new software to hack a device, will still be cited decades hence, when technology that connects hardware to our wetware is certain to be common. After all, the FBI is now proposing that a law from 1789 applies perfectly well in 2016, allowing a judge to "conscript Apple into government service", and many of our political representatives appear delighted to concur. A brief tour of current technology and security flaws sets the stage for how bad it is likely to get.

As I suggested a couple of years ago, hospital networks and medical devices are examples of existing critical vulnerabilities. Just in the last week, hackers took control of computers and devices in a Los Angeles hospital, and only a few days later received a ransom to restore access and functionality. We will be seeing more of this. The targets are soft, and when attacked they have little choice but to pay when patients' health and lives are on the line. What are hospitals going to do when they are suddenly locked out of all the ventilators or morphine pumps in the ICU? Yes, yes, they should harden their security. But they won't be fully successful, and additional ransom events will inevitably happen. Patients will be exposed to ever more such flaws as they come to rely on more medical devices to maintain their health. Now consider where this trend is headed: what sorts of security problems will we create by implanting those medical devices into our bodies?

Already on the market are cochlear implants that are essentially Ethernet connections to the brain, although they are not physically configured that way today. An external circuit converts sound into signals that directly stimulate the auditory nerves. But who holds the password for the hardware? What other sorts of signals can be piped into the auditory nerve? This sort of security concern, in which networked electronics implanted in our bodies create security holes, has actually been with us for more than a decade. When serving as Vice President, Dick Cheney had the wireless networking on his fully-implanted heart defibrillator disabled because it was perceived as a threat. The device contained a test mode that could be exploited to fully discharge the battery into the surrounding tissue. This might be called a fatal flaw. And it will only get worse.

DARPA has already limited the strength of a recently developed, fully articulated bionic arm to "human normal" precisely because the organization is worried about hacking. These prosthetics are networked in order to tune their function and provide diagnostic information. Hacking is inevitable, by users interested in modifications and by miscreants interested in mischief.

Not content to replace damaged limbs, within the last few months DARPA has announced a program to develop what the staff sometimes calls a "cortical modem". DARPA is quite serious about developing a device that will provide direct connections between the internet and the brain. The pieces are coming together quickly. Several years ago a patient in Sweden received a prosthesis grafted to the bone in his arm and controlled by local neural signals. Last summer I saw Gregoire Courtine show video of a monkey implanted with a microfabricated neural bridge that spanned a severed spinal cord; flip a switch on and the monkey could walk, flip it off and the monkey was lame. Just this month came news of an implanted cortical electrode array used to directly control a robot arm. Now, imagine you have something like this implanted in your spine or head, so that you can walk or use an arm, and you find that the manufacturer was careless about security. Oops. You'll have just woken up — unpleasantly — in a William Gibson novel. And you won't be alone. Given the massive medical need, followed closely by the demand for augmentation, we can expect rapid proliferation of these devices and accompanying rapid proliferation of security flaws, even if today they are one-offs. But that is the point; as Gibson has famously observed, "The future is already here — it's just not evenly distributed yet."

When — when — cortical modems become an evenly distributed human augmentation, they will inevitably come with memory and computational power that exceeds the wetware they are attached to. (Otherwise, what would be the point?) They will expand the capacity of all who receive them. They will be used as any technology is, for good and ill. Which means they will be targets of interest for law enforcement and intelligence agencies. Judges will be grappling with this for decades: where does the device stop and the human begin? ("Not guilty by reason of hacking, your honor." "I heard voices in my head.") And these devices will also come with security flaws that will expose the human brain to direct influence from attackers. Some of those flaws will be accidents, bugs, zero-days. But how will we feel about back doors built in to allow governments to pursue criminal or intelligence investigations, back doors that lead directly into our brains? I am profoundly unimpressed by suggestions that any government could responsibly use or look after keys to any such back door.

There are other incredibly interesting questions here, though they all lead to the same place. For example, would neural augmentation count as a medical device? If so, what does the testing look like? If not, who will be responsible for guaranteeing safety and security? And I have to wonder, given the historical leakiness of backdoors: if governments insist on access to these devices, who is going to want to accept the liability inherent in protecting access to customers' brains? What insurance or reinsurance company would issue a policy indemnifying a cortical modem with a known, built-in security flaw? Undoubtedly an insurance policy can be written that exempts governments from responsibility for the consequences of using a backdoor, but how can a government or company guarantee that no one else will exploit the backdoor? Obviously, they can do no such thing. Neural interfaces will have to be protected by maximum security; otherwise manufacturers will never subject themselves to the consequent product liability.

Which brings us back to today, and the precedent set by Apple in refusing to make it easy for the FBI to hack an iPhone. If all this talk of backdoors and golden keys by law enforcement and politicians moves forward to become precedent by default, or is written into law, we risk building security holes into even more devices. Eventually, we will become subject to those security holes in increasingly uncomfortable, personal ways. That is why it is important to support Tim Cook as he defends your brain.


Brewing Bad Biosecurity Policy

Last week brought news of a truly interesting advance in porting opioid production to yeast. This is pretty cool science, because it involves combining enzymes from several different organisms to produce a complex and valuable chemical, although no one has yet managed to integrate the whole synthetic pathway in microbes. It is also potentially pretty cool economics, because implementing opiate production in yeast should dramatically lower the price of a class of important pain medications, to a price that developing countries might finally be able to afford.

Alongside the scientific article was a Commentary – with images of drug dens and home beer brewing – explicitly suggesting that high doses of morphine and other addictive narcotics would soon be brewed at home in the garage. The text advertised “Home-brew opiates” – wow, just like beer! The authors of the Commentary used this imagery to argue for immediate regulation of 1) yeast strains that can make opioids (even though no such strains exist yet), and 2) the DNA sequences that code for the opioid synthesis pathways. This is a step backward for biosecurity policy, by more than a decade, because the proposal embraces measures known to be counterproductive for security.

The wrong recipe.

I'll be very frank here – proposals like this are deep failures of the science policy enterprise. The logic that leads to “must regulate now!” 1) is methodologically flawed and 2) ignores data we have in hand about the impacts of restricting access to technology and markets. In what follows, I will deal in due course with both kinds of failures, as well as looking at the predilection to assume regulation and restriction should be the primary policy response to any perceived threat.

There are some reading this who will now jump to “Carlson is yet again saying that we should have no regulation; he wants everything to be available to anyone.” This is not my position, and never has been. Rather, I insist that our policies be grounded in data from the real world. And the real world data we have demonstrates that regulation and restriction often cause more harm than good. Moreover, harm is precisely the impact we should expect from restricting access to democratized biological technologies. What if even simple analyses suggest that proposed actions are likely to make things worse? What if the specific policy actions recommended in response to a threat have already been shown to exacerbate damage from the threat? That is precisely the case here. I am constantly confronted with people saying, "That's all very well and good, but what do you propose we do instead?" The answer is simple: I don't know. Maybe nothing. Maybe there isn't anything we can do. But for now, I just want us to not make things worse. In particular I want to make sure we don't screw up the emerging bioeconomy by building in perverse incentives for black markets, which would be the worst possible development for biosecurity.

Policy conversations at all levels regularly make these same mistakes, and the arguments are nearly uniform in structure. “Here is something we don't know about, or are uncertain about, and it might be bad – really, really bad – so we should most certainly prepare policy options to prevent the hypothetical worst!” Exclamation points are usually just implied throughout, but they are there nonetheless. The policy options almost always involve regulation and restriction of a technology or process that can be construed as threatening, usually with little or no consideration of what that threatening thing might plausibly grow into, nor of how similar regulatory efforts have fared historically.

This brings me to the setup. Several news pieces (e.g., the NYT, Buzzfeed) succinctly pointed out that the “home-brew” language was completely overblown and inflammatory, and that the Commentary largely ignored both the complicated rationale for producing opioids in yeast and the complicated benefits of doing so. The Economist managed to avoid getting caught up in discussing the Commentary, remaining mostly focused on the science, while in the last paragraph touching on the larger market issues and the potential of future “home brew opium” to pull the economic rug out from under heroin cartels. (Maybe so. It's an interesting hypothesis, but I won't have much to say about it here.) Over at Biosecu.re, Piers Millet – formerly of the Biological Weapons Convention Implementation Support Unit – calmly responded to the Commentary by observing that, for policy inspiration, the authors look backward rather than forward, and that the science itself demonstrates the world we are entering requires developing completely new policy tools to deal with new technical and economic realities.

Stanford's Christina Smolke, who knows a thing or two about opioid production in yeast, observed in multiple news outlets that getting yeast to produce anything industrially at high yield is finicky work to set up and then hard to maintain as a production process. It's relatively easy to produce trace amounts of lots of interesting things in microbes (ask any iGEM team); it is very hard and very expensive to scale up to produce interesting amounts of interesting things in microbes (ask any iGEM team). Note that we are swimming in data about how hard this is to do, which is an important part of this story. In addition to the many academic examples of challenges in scaling up production, the last ten years are littered with startups that failed at scale-up. The next ten years, alas, will see many more.

Even with an engineered microbial strain in hand, it can be extraordinarily hard to make a microbe jump through the metabolic and fermentation hoops to produce interesting/useful quantities of a compound. And then transferring that process elsewhere is very frequently its own expensive and difficult effort. It is not true that you can just mail a strain and a recipe from one place to another and automatically get the same result. However, it is true that all this will get easier over time, and many people are working on reproducible process control for biological production.

That future looks amazing. I've written many times about how the future of the economy looks like beer and cows – in other words, that our economy will inevitably be based on distributed biological manufacturing. But that is the future, not the present. Nor is it imminent. I truly wish it were imminent, but it is not. Whole industries exist to solve these problems, and much more money and effort will be spent before we get there. The economic drivers are huge. Some of the investments made by Bioeconomy Capital are, in fact, aimed at eventually facilitating distributed biological manufacturing. But, if nothing else, these investments have taught me just how much effort is required to reach that goal. If anybody out there has a credible plan to build the Cowborg or to microbrew chemicals and pharmaceuticals as suggested by the Commentary, I will be your first investor. (I said “credible”! Don't bother me otherwise.) But I think any sort of credible plan is years away. For the time being, the only thing we can expect to brew like beer is beer.

FBI Supervisory Special Agent Ed You makes great use of the “brewing bad” and “baking bad” memes, mentioned in the Commentary, in talking to students and professionals alike about the future of drug production. But this is in the context of taking personal responsibility for your own science and speaking up when you see something dangerous. I've never heard Ed say anything about increasing surveillance and enforcement efforts as the way forward. In fact, in the Times piece, Ed specifically says, “We’ve learned that the top-down approach doesn’t work.” I can't say exactly why Ed chose that turn of phrase, but I can speculate that it is based on 1) his own experience as a professional bench molecular biologist, 2) the catastrophically bad impacts of the FBI's earlier arrests and prosecutions of scientists and artists for doing things that were legal, and 3) the official change in policy from the White House and National Security Council away from suppression and toward embracing and encouraging garage biology. The standing order at the FBI is now engagement. In fact, Ed You's arrival on the scene coincided with any number of positive policy changes in DC, and I am happy to give him all the credit I can. Moreover, I completely agree with Ed and the Commentary authors that we should be discussing the implications of new technologies early on, an approach I have been advocating for 15 years. But I completely disagree with the authors that the current or future state of the technology serves as an indicator of the need to prepare some sort of regulatory response. We tried regulating fermentation once before; that didn't work out so well [1].

Badly baked regulatory policy.

So now we're caught up to about the middle of the Commentary. At this point, the story is like other such policy stories. “Assume hypothetical thing is inevitable: discuss and prepare regulation.” And like other such stories, here is where it runs off the rails with a non sequitur common in policy work. Even if the assumption of the thing's inevitability is correct (which is almost always debatable), the next step should be to assess the impact of the thing. Is it good, or is it bad? (By a particular definition of good and bad, of course, but never mind that for now.) Usually, this question is actually skipped and the thing is just assumed to be bad and in need of a policy remedy, but the assumption of badness, breaking or otherwise, isn't fatal for the analysis.

Let's say it looks bad – bad, bad, bad – and the goal of your policy is to try to either head it off or fix it. First you have to have some metric to judge how bad it is. How many people are addicted, or how many people die, or how is the crime rate affected? Just how bad is it breaking? Next – and this is the part the vast majority of policy exercises miss – you have to try to understand what happens in the absence of a policy change. What is the cost of doing nothing, of taking no remediating action? Call this the null hypothesis. Maybe there is even a benefit to doing nothing. But only now, after evaluating the null hypothesis, are you in a position to propose remedies, because only now do you have a baseline against which to compare costs and benefits. If you leap directly to “the impacts of doing nothing are terrible, so we must do something, anything, because otherwise we are doing nothing”, then you have already lost. To be sure, policy makers and politicians feel that their job is to do something, to take action, and that if they are doing nothing then they aren't doing their jobs. That is just a recipe for bad policy. Without the null hypothesis, your policy development is a waste of time and, potentially, could make matters worse. This happens time and time again. Prohibition, for example, was exactly this sort of failure: it cost far more than it benefited [2].
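
To make that order of operations concrete, here is the same reasoning as a toy calculation. It is a schematic sketch with invented numbers; the policy options, costs, and units are hypothetical, chosen only to show why the null hypothesis must be scored before any remedy:

```python
# Schematic policy scoring: evaluate the null hypothesis (the cost of
# doing nothing) before scoring any intervention against it.
# All numbers below are invented, for illustration only.

from dataclasses import dataclass

@dataclass
class Policy:
    name: str
    direct_cost: float     # enforcement and administration
    harm_averted: float    # reduction in harm relative to doing nothing
    side_effects: float    # new harms created, e.g. black markets

BASELINE_HARM = 100.0      # the null hypothesis: cost of doing nothing

options = [
    Policy("do nothing (null hypothesis)", 0, 0, 0),
    Policy("heavy-touch regulation", 40, 30, 35),
    Policy("light-touch engagement", 10, 15, 5),
]

for p in options:
    total = BASELINE_HARM - p.harm_averted + p.direct_cost + p.side_effects
    print(f"{p.name:30s} total cost: {total:6.1f}")
# A remedy is justified only if its total comes in below the baseline;
# here the heavy-touch option scores worse than doing nothing at all.
```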

We keep making the same mistake. We have plenty of data and reporting, courtesy of the DEA, showing that the ongoing crackdown on methamphetamine production has created bigger and blacker markets, as well as mayhem and violence in Mexico, all without much impact on domestic drug use. Here is the DEA Statistics & Facts page – have a look and then make up your own mind.

I started writing about the potential negative impacts of restricting access to biological technologies in 2003 (PDF), including the likely emergence of black markets in the event of overregulation. I looked around for any data I could find on the impacts of regulating democratized technologies. In particular, I happened upon the DEA's first reporting of the impacts of the then newly instituted crackdown on domestic methamphetamine production and distribution. Even in 2003, the DEA was already observing that it had created bigger, blacker markets – which are by definition harder to surveil and disrupt – without impacting meth use. The same story has played out similarly in cocaine production and distribution, and more recently in the markets for “bath salts”, aka “legal highs”.

That is, we have multiple, clear demonstrations that, rather than improving the world, restricting access to distributed production can instead cause harm. But, really, when has this ever worked? And why do people think going down the same path in the future will lead anywhere else? I am still looking for data – any data at all – that supports the assertion that regulating biological technologies will have any different result. If you have such data, bring it. Let's see it. In the absence of that data, policy proposals that lead with regulation and restriction are doomed to repeat the failures of the past. It has always seemed to me a terrible idea to transfer such policies over to biosecurity. Yet that is exactly what the Commentary proposes.

Brewing black markets.

The fundamental problem with the approach advocated in the Commentary is that security policies, unlike beer brewing, do not work equally well across all technical and economic scales. What works in one context will not work in another. Nuclear weapons can be secured by guns, gates, and guards because they are expensive to build and the raw materials are hard to come by, so heavy touch regulation works just fine. There are some industries – as it happens, beer brewing – where only light touch regulation works. In the U.S., we tried heavy touch regulation in the form of Prohibition, and it failed miserably, creating many more problems than it solved. There are other industries, for example DNA and gene synthesis, in which even light touch regulations are a bad idea. Indeed, light touch regulation of DNA synthesis has already created the problem it was supposed to prevent, namely the existence of DNA synthesis providers that 1) intentionally do not screen their orders and 2) ship to countries and customers that are on unofficial black lists.

For those who don't know this story: In early 2013, the International Council for the Life Sciences (ICLS) convened a meeting in Hong Kong to discuss "Codes of Conduct" for the DNA synthesis industry, namely screening orders and paying attention to who is doing the ordering. According to various codes and guidelines promulgated by industry associations and the NIH, DNA synthesis providers are supposed to reject orders that are similar to sequences that code for pathogens, or genes from pathogens, and it is suggested that they do not ship DNA to certain countries or customers (the unofficial black list). Here is a PDF of the meeting report; be sure to read through Appendix A.

The report is fairly anodyne in describing what emerged in discussions. But people who attended have since described the Chinese DNA synthesis market in public as follows. There are three tiers of DNA providers. The first tier is populated with companies that comply with the various guidelines and codes promulgated internationally, because this tier serves international markets. There is a second tier that appears to similarly comply, because while these companies serve primarily the large internal market, they have aspirations of also serving the international market. There is a third tier that exists specifically to serve orders from customers seeking ways around the guidelines and codes. (One company in this tier was described to me as a "DNA shanty", with the employees living over the lab.) Thus the relatively light touch guidelines (which are not laws) have directly incentivized exactly the behavior they were supposed to prevent. This is not a black market, per se, and cannot accurately be described as illegal, so let's call it a "grey market".

I should say here that this is entirely consistent with my understanding of biotech in China. In 2010, I attended a warm-up meeting for the last round of BWC negotiations. After that meeting, I chatted with one of the Chinese representatives present, hoping to gain a little bit of insight into the size of the Chinese bioeconomy and the state of the industry. My query was met with frank acknowledgment that the Chinese government isn't able to keep track of the industry, doesn't know how many companies are active, how many employees they have, or what they are up to, and so doesn't hold out much hope of controlling the industry. I covered this a bit in my 2012 Biodefense Net Assessment report for DHS. (If anyone has any new insight into the Chinese biotech industry, I am all ears.) Not that the U.S. or Europe is any better in this regard, as our mechanisms for tracking the biotech industry are completely dysfunctional, too. There could very well be DNA synthesis providers operating elsewhere that don't comply with the recommended codes of conduct: we have no real means of broadly surveying for this behavior, and no physical means either to track it remotely or to control it.

I am a little bit sensitive about the apparent emergence of the DNA synthesis grey market, because I warned for years, in print and in person, that DNA screening would create exactly this outcome. I was condescendingly told on many occasions that it was foolish to imagine a black market for DNA. And besides, we have to do something, right? But it was never very complicated to think this through. DNA is cheap, and getting cheaper. You need this cheap DNA as code to build more complicated, more valuable things. Ergo, restrictions on DNA synthesis will incentivize people to seek, and to provide, DNA outside any control mechanism. The logic is pretty straightforward, and denying it is simply willful self-deception. Regulation of DNA synthesis will never work. In the vernacular of the day: because economics. To make it even simpler: because humans.

So the idea, still being suggested, that proscription of certain DNA sequences is a viable route to security just rankles. And it is demonstrably counterproductive. The restrictions incentivize the bad behavior they are supposed to prevent, probably much earlier than might have happened otherwise. The take-home message here is that not all industries are the same, because not all technologies are the same, and our policy approaches should take these differences into account rather than papering over them. In particular, restricting access to information in our modern economy is a losing game.

Where do we go from here?

We are still at the beginning of biotech. This is the most important time to get it right. This is the most important time not to screw up and make things worse. And it is important that we are at the beginning, because things are not yet screwed up.

Conversely, we are well down the road in developing and deploying drug policies, with much damage done. To be sure, despite the accumulated and ongoing costs, you have to acknowledge that it is not at all clear that suddenly legalizing drugs such as meth or cocaine would be a positive step. I am not in any way making that argument. But it is abundantly clear that drug enforcement activities have created the world we live in today. Was there an alternative? If the DEA had been able to do a cost/benefit analysis of the impacts of its actions – that is, to predict the emergence of drug trafficking organizations (DTOs) and their role in production, trafficking, and violence – would the policy response 15 years ago have been any different? If Nixon had more thoughtfully considered even what was known 50 years ago about the impacts of proscription, would he have launched the war on drugs? That is a hard question, because drug policy is clearly driven more by stories and personal politics than by facts. I am inclined to think the present drug policy mess was inevitable. Even with the DEA's self-diagnosed role in creating and sustaining DTOs, the national conversation is still largely dominated by “the war on drugs”. And thus the first reaction to the prospect of microbial narcotics production is to employ strategies and tactics that have already failed elsewhere. I would hate to think we are in for a war on microbes, because that is doomed to failure.

But we haven't yet made all those mistakes with biological technologies. I continue to hope that, if nothing else, we will avoid making things worse by rejecting policies we already know won't work. 

Notes:

[1] Pause here to note that even this early in the setup, the Commentary conflates, via words and images, the use of yeast in home-brew narcotics with centralized brewing of narcotics by cartels. These are two quite different, and perhaps mutually exclusive, technoeconomic futures. Drug cartels very clearly have the resources to develop technology. Depending on whether you listen to the U.S. Navy or the U.S. Coast Guard, either 30% or 80% of the cocaine delivered to the U.S. is transported at some point in semisubmersible cargo vessels or in fully submersible cargo submarines. These 'smugglerines', if you will, are the result of specific technology development efforts directly incentivized by governmental interdiction efforts. Similarly, if cartels decide that developing biological technologies suits their business needs, they are likely to do so. And cartels certainly have incentives to develop opioid-producing yeast, because fermentation usually lowers the cost of goods by between 50% and 90% compared to production in plants. Again, cartels have the resources, and they aren't stupid. If cartels do develop these yeast strains, for competitive reasons they certainly won't want anyone else to have them. Home-brew narcotics would further undermine their monopoly.

[2] Prohibition was obviously the result of a complex socio-political situation, just as was its repeal. If you want a light-touch look at the interaction of the teetotaler movement, the suffragette movement, and the utility of Prohibition in the continued repression of freed slaves after the Civil War, check out Ken Burns's “Prohibition” on Netflix. But after all that, it was still a dismal failure that created more problems than it solved. Oh, and Prohibition didn't accomplish its intended aims. Anheuser-Busch thrived during those years; its best-selling products at the time were yeast and kettles (see William Knoedelseder's Bitter Brew)...

Using programmable inks to build with biology: mashing up 3D printing and biotech

Scientists and engineers around the globe dream of employing biology to create new objects. The goal might be building replacement organs, electronic circuits, living houses, or cowborgs and carborgs (my favorites) that are composed of both standard electromechanical components and novel biological components. Whatever the dream, and however outlandish, we are getting closer every day.

Looking a bit further down the road, I would expect organs and tissues that have never before existed. For example, we might be able to manufacture hybrid internal organs for the cowborg that process rough biomass into renewable fuels and chemicals. Both the manufacturing process and the cowborg itself might utilize novel genetic pathways generated in DARPA's Living Foundries program. The first time I came across ideas like the cowborg was in David Brin's short story "Piecework". I've pondered this version of distributed biological manufacturing for years, pursuing the idea into microbrewing, and then to the cowborg, the economics of which I am now exploring with Steve Aldrich from bio-era.

Yet as attractive and powerful as biology is as a means for manufacturing, I am not sure it is powerful enough. Other ways that humans build things, and that we build things that build things, are likely to be part of our toolbox well into the future. Corrosion-resistant plumbing and pumps, for example, constitute very useful kit for moving around difficult fluids, and I wouldn't expect Teflon to be produced biologically anytime soon. Photolithography, electrodeposition, and robotics, now emerging in the form of 3D printing, enable precise control over the position of matter, though frequently using materials and processes inimical to biology. Humans are really good at electrical and mechanical engineering, and we should build on that expertise with biological components.

Let's start with the still-hypothetical cowborg. The mechanical part of a cowborg could be robotic, and could look like Big Dog, or perhaps simply a standard GPS-guided harvester, which comes standard with air conditioning and a DVD player to keep the back-up human navigation system awake. This platform would be supplemented by biological components, initially tanks of microbes, that turn raw feedstocks into complex materials and energy. Eventually, those tanks might be replaced by digestive organs and udders that produce gasoline instead of milk, where the artificial udders are enabled by advances in genetics, microbiology, and bioprinting. Realizing this vision could make biological technologies part of literally anything under the sun. In a simple but effective application along these lines, the ESA is already using "burnt bone charcoal" as a protective coating on a new solar satellite.

But there is one persistent problem with this vision: unless it is dead and processed, as in the case of the charcoal spacecraft coating, biology tends not to stay where you put it. Sometimes this will not matter, such as with many replacement transplant organs that are obviously supposed to be malleable, or with similar tissues made for drug testing. (See the recent Economist article, "Printing a bit of me", this CBS piece on Alexander Seifalian's work at University College London, and this week's remarkable news out of Anthony Atala's lab.) Otherwise, cells are usually squishy, and they tend to move around, which complicates their use in fabricating small structures that require precise positioning. So how do you use biology to build structures at the micro-scale? More specifically, how do you get biology to build the structures you want, as opposed to the structures biology usually builds?

We are getting better at directing organisms to make certain compounds via synthetic biology, and our temporal control of those processes is improving. We are inspired by the beautiful fabrication mechanisms that evolution has produced. Yet we still struggle to harness biology to build stuff. Will biological manufacturing ever be as useful as standard machining is, or as flexible as 3D printing appears it will be? I think the answer is that we will use biology where it makes sense, and we will use other methods where they make sense, and that in combination we will get the best of both worlds. What will it mean when we can program complex matter in space and time using a fusion of electromechanical control (machining and printing) and biochemical control (chemistry and genetics)? Several recent developments point the way, demonstrating hybrid approaches that employ the 3D printing of biological inks that subsequently display growth and differentiation.

Above is a slide I used at the recent SynBERC retreat in Berkeley. On the upper left, Organovo is now shipping lab-produced liver tissue for drug testing. This tissue is not yet ready for use in transplants, but it does display all the structural and biochemical complexity of adult livers. A bit further along in development are tracheas from Harvard Biosciences, which are grown from patient stem cells on 3D-printed scaffolds (Claudia Castillo was the first recipient of a transplant like this in 2007, though her cells were grown on a cadaver windpipe first stripped of the donor's cells). And then we have the paper on the right, which really caught my eye. In that publication, viruses on a 3D woven substrate were used to reprogram human cells that were subsequently cultured on that substrate. The green cells above, which may not look like much, are the result of combining 3D fabrication of non-living materials with a biological ink (the virus), which in combination serve to physically and genetically program the differentiation and growth of mammalian cells, in this case into cartilage. That's pretty damn cool.

Dreams of building with biology

Years ago, during the 2003 "DARPA/ISAT Synthetic Biology Study", we spent considerable time discussing whether biology could be used to rationally build structures like integrated circuits. The idea isn't new: is there a way to get cells to build structures at the micro- or nano-scale that could help replace photolithography and other 2D patterning techniques used to build chips? How can humans make use of cells -- lovely, self-reproducing factories -- to construct objects at the nanometer scale of molecules like proteins, DNA, and microtubules?

Cells, of course, have dimensions in the micron range, and commercial photolithography was, even in 2003, operating at about the 25 nanometer range (now at about 15 nm). The challenge is to program cells to lay down structures much smaller than they are. Biology clearly knows how to do this already. Cells constantly manufacture and use complex molecules and assemblies that range from 1 to 100 nm. Many cells move or communicate using extensions ("processes") that are only 10-20 nanometers in width but tens of microns in length. Alternatively, we might directly use synthetic DNA to construct a self-assembling scaffold at the nano-scale and then build devices on that scaffold using DNA-binding proteins. DNA origami has come a long way in the last decade and can be used to build structures that span nanometers to microns, and templating circuit elements on DNA is old news. We may even soon have batteries built on scaffolds formed by modified, self-assembling viruses. But putting all this together in a biological package that enables nanometer-scale control of fabrication across tens of centimeters, and doing it as well as lithography, and as reproducibly as lithography, has thus far proved difficult.

Conversely, starting at the macro scale, machining and 3D printing work pretty well from meters down to hundreds of microns. Below that length scale we can employ photolithography and other microfabrication methods, which can be used to produce high volumes of inexpensive objects in parallel, but which also tend to have quite high cost barriers. Transistors are so cheap that they are basically free on a per unit basis, while a new chip fab now costs Intel about $10 billion.
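
That phrase "basically free" is easy to sanity-check with round numbers. The sketch below amortizes the cost of a fab over its lifetime output; aside from the $10 billion, every figure (wafer starts, lifetime, chips per wafer, transistors per chip) is an illustrative guess, not Intel's actual data:

```python
# Amortize fab capital over lifetime transistor output.
# All figures are illustrative round numbers, not vendor data.
FAB_COST          = 10e9      # dollars
WAFERS_PER_MONTH  = 50_000
LIFETIME_MONTHS   = 60        # ~5 years before major retooling
CHIPS_PER_WAFER   = 500
TRANSISTORS_EACH  = 1e9       # per chip

total = WAFERS_PER_MONTH * LIFETIME_MONTHS * CHIPS_PER_WAFER * TRANSISTORS_EACH
print(f"capital cost per transistor: ${FAB_COST / total:.1e}")
# ~7e-9 dollars per transistor: free at the unit level, but only
# after clearing a ten-billion-dollar barrier to entry.
```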

My experiences working on different aspects of these problems suggest to me that, eventually, we will learn to exploit the strengths of each of the relevant technologies, just as we learn to deal with their weaknesses; through the combination of these technologies we will build objects and systems that we can only dream of today.

Squishy construction

Staring through a microscope at fly brains for hours on end provides useful insights into the difference between anatomy and physiology, between construction and function. In my case, those hours were spent learning to find a particular neuron (known as H1) that is the output of the blowfly motion measurement and computation system. The absolute location of H1 varies from fly to fly, but eventually I learned to find H1 relative to other anatomical landmarks and to place my electrode within recording range (a few tens of microns) on the first or second try. It's been long known that the topological architecture (the connectivity, or wiring diagram) of fly brains is identical between individuals of a given species, even as the physical architecture (the locations of neurons) varies greatly. This is the difference between physiology and anatomy.

The electrical and computational output of H1 is extremely consistent between individuals, which is what makes flies such great experimental systems for neurobiology. This is, of course, because evolution has optimized the way these brains work -- their computational performance -- without the constraint that all the bits and pieces must be in exactly the same place in every brain. Fly brains are constructed of squishy matter, but the computational architecture is quite robust. Over the last twenty years, humans have learned to grow various kinds of neurons in dishes, and to coax them into connecting in interesting ways, but it is usually very hard to get those cells to position themselves physically exactly where you want them, with the sort of precision we regularly achieve with other forms of matter.

Crystalline construction

The first semiconductor processing instrument I laid hands on, in 1995, was a stepper. This critical bit of kit projects UV light through a mask, which contains the image of a structure or circuit, onto the surface of a photoresist-covered silicon wafer. The UV light alters the chemical structure of the photoresist, which after further processing eventually enables the underlying silicon to be chemically etched in a pattern identical to the mask. Metal or other chemicals can be similarly patterned. After each exposure, the stepper automatically shifts the wafer over, thereby creating an array of structures or circuits on each wafer. This process enables many copies of a chip to be packed onto a single silicon wafer and processed in parallel. The instruments on which I learned to process silicon could handle ~10 cm diameter wafers. Now the standard is about 30 cm, because putting more chips on a wafer reduces marginal processing costs. But it isn't cheap to assemble the infrastructure to make all this work. The particular stepper I used (this very instrument, as a matter of fact), which had been donated to the Nanofabrication Facility at Cornell and which was ancient by the time I got to it, contained a quartz lens I was told cost about $1 million all by itself. The kit used in a modern chip fab is far more expensive, and the chemical processing used to fabricate chips is quite inimical to cells. Post-processing, silicon chips can be treated in ways that encourage cells to grow on them and even to form electrical connections, but the overhead to get to that point is quite high.
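
The economics behind that move from ~10 cm to ~30 cm wafers is simple geometry: usable area grows with the square of the diameter. Here is a rough sketch using a standard first-order dies-per-wafer estimate; the 10 mm die size is an arbitrary illustrative choice:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_side_mm: float) -> int:
    """First-order estimate: gross area divided by die area, minus a
    conventional correction for partial dies lost at the wafer edge."""
    die_area = die_side_mm ** 2
    gross = math.pi * (wafer_diameter_mm / 2) ** 2 / die_area
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area)
    return int(gross - edge_loss)

for d_mm in (100, 300):  # ~10 cm and ~30 cm wafers
    print(f"{d_mm} mm wafer: ~{dies_per_wafer(d_mm, 10)} dies at 10x10 mm")
# ~56 vs ~640: an order of magnitude more chips per pass through
# each (largely fixed-cost) process step.
```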

Arbitrary construction

The advent of 3D printers enables the reasonably precise positioning of materials just about anywhere. Depending on how much you want to spend, you can print with different inks: plastics, composites, metals, and even cells. This lets you put stuff where you want it. The press is constantly full of interesting new examples of 3D printing, including clothes, bone replacements, objets d'art, gun components, and parts for airplanes. As promising as all this is, the utility of printing is still limited by the step size (the smallest increment of the position of the print head) and the spot size (the smallest amount of stuff the print head can spit out) of the printer itself. Moreover, printed parts are usually static: once you print them, they just sit there. But these limitations are already being overcome by using more complex inks.

Hybrid construction

If the ink used in the printer has the capacity to change after it gets printed, then you have introduced a temporal dimension into your process: now you have 4D printing. Typically, 4D printing refers to objects whose shape or mechanical properties can be dynamically controlled after construction, as with these 3D objects that fold up after being printed as 2D objects. But beyond this, if you combine squishy, crystalline, and arbitrary construction, you get a set of hybrid construction techniques that allows programming matter from the nanoscale to the macroscale in both time and space.

Above is a slide from a 2010 DARPA study on the Future of Manufacturing, from a talk in which I tried to articulate the utility of mashing up 3D printing and biotech. We have already seen the first 3D printed organs, as described earlier. Constructed using inks that contain cells, even the initial examples are physiologically similar to natural organs. Beyond tracheas, printed or lab-grown organs aren't yet ready for general use as transplants, but they are already being used to screen drugs and other chemicals for their utility and toxicity. Inks could also consist of: small molecules (i.e. chemicals) that react with each other or the environment after printing; DNA and proteins that serve structural, functional (say, electronic), or even genetic roles after printing; viruses that form structures or that are intended to interact biologically with later layers; cells that interact with each other or follow some developmental program defined genetically or by the substrate, as demonstrated in principle by the cartilage paper above.

The ability to program the three-dimensional growth and development of complex structures will have transformative impacts throughout our manufacturing processes, and therefore throughout our economy. The obvious immediate applications include patient-specific organs and materials such as leather, bone, chitin, or even keratin (think vat-grown ivory) used in contexts very different from those we are used to today.

It is hard to predict where this is going, of course, but any function we now understand for molecules or cells can be included in programmable inks. Simple two-part chemical reactions will be common in inks first, transitioning over time to more complex inks containing multiple reactants, including enzymes and substrates. Eventually, programmable printer inks will employ the full complement of genes and biochemistry present in viruses, bacteria, and eukaryotic cells. Beyond existing genetics and biochemistry, new enzymes and genetic pathways will provide materials we have never before laid hands on. Within DARPA's Living Foundries program is the 1000 Molecules effort, which recently awarded contracts to use biology to generate "chemical building blocks for accessing radical new materials that are impossible to create with traditional petroleum-based feedstocks".

Think about that for a moment: it turns out that of the entire theoretical space of compounds we can imagine, synthetic chemistry can only provide access to a relatively small sample. Biology, on the other hand, in particular novel systems of (potentially novel) enzymes, can be programmed to synthesize a much wider range of compounds. We are just learning how to design and construct these pathways; the world is going to look very different in ten years' time. Consequently, as these technologies come to fruition, we will learn to use new materials to build objects that may be printed at one length scale, say centimeters, and that grow and develop at length scales ranging from nanometers to hundreds of meters.

Just as hybrid construction that combines the features of printers and inks will enable manufacturing on widely ranging length scales, so will it give us access to a wide range of time scales. A 3D printer presently runs on fairly understandable human time scales of seconds to hours. For the time being, we are still learning how to control print heads and robotic arms that position materials, so they move fairly slowly. Over time, the print head will inevitably be able to move on time scales at least as short as milliseconds. Complex inks will then extend the reach of the fabrication process into the nanoseconds on the short end, and into the centuries on the long end.

I will be the first to admit that I haven't the slightest idea what artifacts made in this way will do or look like. Perhaps we will build/grow trees the size of redwoods that produce fruit containing libations rivaling the best wine and beer. Perhaps we will build/grow animals that languidly swim at the surface of the ocean, extracting raw materials from seawater and photosynthesizing compounds that don't even exist today but that are critical to the future economy.

These examples will certainly prove hopelessly naive. Some problems will turn out to be harder than they appear today, and other problems will turn out to be much easier than they appear today. But the constraints of the past, including the oft-uttered phrase "biology doesn't work that way", do not apply. The future of engineering is not about understanding biology as we find it today, but rather about programming biology as we will build it tomorrow.

What I can say is that we are now making substantive progress in learning to manipulate matter, and indeed to program matter. Science fiction has covered this ground many times, sometimes well, sometimes poorly. But now we are doing it in the real world, and sketches like those on the slides above provide something of a map to figure out where we might be headed and what our technical capabilities will be like many years hence. The details are certainly difficult to discern, but if you step back a bit, and let your eyes defocus, the overall trajectory becomes clear.

This is a path that John von Neumann and Norbert Wiener set out on many decades ago. Physics and mathematics taught us what the rough possibilities should be. Chemistry and materials science have demonstrated many detailed examples of specific arrangements of atoms that behave physically in specific ways. Control theory has taught us both how organisms behave over time and how to build robots that behave in similar ways. Now we are learning to program biology at the molecular level. The space of the possible, of the achievable, is expanding on a daily basis. It is going to be an interesting ride.

How Competition Improves DNA Sequencing

The technology that enables reading DNA is changing very quickly.  I've chronicled how price and productivity are each improving in a previous post; here I want to try to get at how the diversity of companies and technologies is contributing to that improvement.
As I wrote previously, all hell is breaking loose in sequencing, which is great for the user.  Prices are falling and the capabilities of sequencing instruments are skyrocketing.  From an analytical perspective, the diversity of platforms is a blessing and a curse.  There is a great deal more data than just a few years ago, but it has become quite difficult to directly compare instruments that produce different qualities of DNA sequence, produce different read lengths, and have widely different throughputs.
I have worked for many years to come up with intuitive metrics to aid in understanding how technology is changing.  Price and productivity in reading and writing DNA are pretty straightforward.  My original paper on this topic (PDF) also looked at the various components of determining protein structures, which, given the many different quantifiable tasks involved, turned out to be a nice way to encapsulate a higher level look at rates of change.
In 2007, with the publication of bio-era's Genome Synthesis and Design Futures, I tried to get at how improvements in instrumentation were moving us toward sequencing whole genomes. The two axes of the relevant plot were 1) read length -- the length of each contiguous string of bases read by an instrument, critical to accurate assembly of genomes or chromosomes that can be hundreds of millions of bases long -- and 2) the daily throughput per instrument -- how much total DNA each instrument could read.  If you have enough long reads you can use this information as a map to assemble many shorter reads into the contiguous sequence.
Because there weren't very many models of commercially available sequencers in 2007, the original plot didn't have a lot of data on it (the red squares and blue circles below).  But the plot did show something interesting, which was that two general kinds of instruments were emerging at that time: those that produced long reads but had relatively limited throughput, and those that produced short reads but could process enormous amounts of sequence per day.  The blue circles below were data from my original paper, and the red squares were derived from a Science news article in 2006 that looked at instruments said to be emerging over the next year or so.
I have now pulled performance estimates out of several papers assessing instruments currently on the market and added them to the plot (purple triangles).  The two groupings present in 2007 are still roughly extant, though the edges are blurring a bit. (As with the price and productivity figures, I will publish a full bibliography in a paper later this year.  For now, this blog post serves as the primary citation for the figure below.)

I am still trying to sort out the best way to represent the data (I am open to suggestions about how to do it better).  At this point, it is pretty clear that the two major axes are insufficient to truly understand what is going on, so I have attempted to add some information regarding the release schedules of new instruments.  Very roughly, we went from a small number of first generation instruments in 2003 to a few more real instruments in 2006 that performed a little better in some regards, plus a few promised instruments that didn't work out for one reason or another.  However, starting in about 2010, we began to see seriously improved instruments being released on an increasingly rapid schedule.  This improvement is the result of competition not just between firms, but also between technologies.  In addition, some of what we are seeing is the emergence of instruments that have niches: long reads but medium throughput, short reads but extraordinary throughput -- combine these two capabilities and you have the ability to crank out de novo sequences at a pretty remarkable rate.  (For reference, the synthetic chromosome Venter et al published a few years ago was about one million bases; human chromosomes are in the range of 60 to 250 million bases.)
[Figure: read length vs. daily throughput per sequencing instrument -- Carlson_Seq_Performance_Comp_2012a.png]
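To get a feel for what the two axes in the figure mean in practice, consider how long an instrument must run to cover a target at useful depth.  This sketch uses illustrative throughput figures and an assumed 30x depth, not vendor specifications:

```python
# Back-of-envelope sequencing time at a given coverage depth.
# Throughputs and the 30x depth are illustrative, not vendor specs.

TARGETS = {
    "Venter synthetic chromosome": 1e6,    # ~1 megabase
    "large human chromosome":      2.5e8,  # ~250 megabases
}

def days_to_coverage(target_bases, depth, bases_per_day):
    """Days of runtime to read the target at the given average depth."""
    return target_bases * depth / bases_per_day

for name, size in TARGETS.items():
    for throughput in (1e8, 1e10):         # bases/day: modest vs high-end
        days = days_to_coverage(size, 30, throughput)
        print(f"{name:28s} @ {throughput:.0e} bases/day: {days:8.2f} days")
# Raw throughput favors short-read machines; long reads earn their keep
# by serving as the map that orders the short reads during assembly.
```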
And now something even more interesting is going on.  Because platforms like PacBio and IonTorrent can upgrade the internal components used in the actual sequencing -- components that include hardware, software, and wetware -- revisions can result in stunning performance improvements.  Below is a plot with all the same data as above, with the addition of one revision from PacBio.  It's true that the throughput per instrument didn't change so much, but such long read lengths mean you can process less DNA and still rapidly produce high resolution sequence, potentially over megabases (modulo error rates, about which there seems to be some vigorous discussion).  This is not to say that PacBio makes the best overall instrument, nor that the company will be commercially viable, but rather that the competitive environment is producing change at an extraordinary rate.
Carlson_Seq_Performance_Comp_2012b.png
If I now take the same plot as above and add a single (putative) MinION nanopore sequencer from Oxford Nanopore (using the performance claims from the company's public presentations; note the question mark on the date), the world again shifts quite dramatically.  Oxford also claims it will ship GridION instruments that essentially consist of racks of MinIONs, but I have not even tried to guess at the performance of that beast.  The resulting sequencing power will alter the shape of the commercial sequencing landscape.  Illumina and Life are not sitting still, of course, but have their own next generation instruments in development.  Jens Gundlach's team (PDF) at the University of Washington has demonstrated a nanopore that is argued to be better than the one Oxford uses, and I understand commercialization is proceeding rapidly, though of course Oxford won't be sitting still either.

One take home message from this, which is highlighted by taking the time to plot this data, is that over the next few years sequencing will become highly accurate, fast, and commonplace.  With the caveat that it is difficult to predict the future, continued competition will result in continued price decreases.
A more speculative take home emerges if you consider the implications of the MinION.  That device is described as a disposable USB sequencer.  If it -- or anything else like it -- works as promised, then some centralized sequencing operations might soon reach the end of their lives.  There are, of course, different kinds of sequencing operations.  If I read the tea leaves correctly, Illumina just reported that its clinical sequencing operations brought in about as much revenue as its other operations combined, including instrument sales.  That's interesting, because it points to two kinds of revenue: sales of boxes and reagents that enable other people to sequence, and certified service operations that provide clinically relevant sequence data.  At the moment, organizations like BGI appear to be generating revenue by sequencing everything under the sun, but cheaper and cheaper boxes might mean that BGI's operations outside of clinical sequencing aren't cost-effective going forward.  Once the razors (electric, disposable, whatever) get cheap enough, you no longer bother going to the barber for a shave.
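To make the razor logic concrete, here is a toy break-even comparison in Python.  Every number is a made-up placeholder, intended only to show the shape of the calculation, not actual sequencing economics.

    # Hypothetical costs, chosen purely for illustration.
    service_fee_per_sample = 4000.0   # centralized, certified sequencing service
    disposable_unit_cost   = 900.0    # a MinION-like disposable sequencer
    reagents_and_labor     = 150.0    # per-run consumables, in-house

    in_house_cost = disposable_unit_cost + reagents_and_labor
    print(f"in-house per sample: ${in_house_cost:,.0f}")
    print(f"service per sample:  ${service_fee_per_sample:,.0f}")
    # Once the in-house figure drops below the service fee -- and disposable
    # hardware keeps pushing it down -- you stop going to the barber.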
I will continue to work with the data in an effort to make the plots simpler and therefore hopefully more compelling.

Are These The Drones We're Looking For? (Part IV)

(Part I, Drones for Destruction, Construction, and Distribution; Part II, Pirate Hunting in the Clouds; Part III, Photos, Bullets, and Smuggling; Part IV, The Coming War Overhead)

The Coming War Overhead

Are you ready for drone dogfights?  How about combat flocks and swarms?  They are coming.  And they will be over your head before you know it.

From my office window I am fortunate to often see eagles and hawks in flight over Seattle's Lake Union.  These raptors are regularly harassed by smaller birds attempting to run off potential predators or competitors.  Each species -- whether predator or prey -- clearly employs different tactics based on size, speed, armaments, number of combatants, etc.  Within a few years this aerial combat will become a frequent sight in the U.S., but rather than raptors, crows, and gulls, the combatants will be drones of all shapes and sizes.  I am not at all sure that we are adequately prepared, or whether we are adequately planning, for the strange world ahead.

This battle will be engaged on many different fronts. Left, right, black hat, white hat, criminal, law enforcement: all will have the same tools at their disposal. Even if federal, state, and local agencies have early access to hand-me-down technologies developed for military applications, they will be up against a large number of innovators, many of whom come from open-source, hacker communities where innovation runs faster than anywhere else.

I have outlined the playing field (Quidditch pitch?) in prior installments. The capability to produce and hack drones is already widely distributed. Drones can now cooperate in swarms to build structures, play music, and play catch. Economic incentives -- as well as the cool factor -- strongly favor the development of ever less expensive and ever more capable drones to be used for photography, shipping, data storage, and communications, just to name a few applications. As drones and the services they provide become more valuable, and as they inevitably become useful for supplying illicit products such as drugs and pirated music and movies, attempts at regulating drone use are likely to increase demand. This is the very definition of 'perverse incentives'. Yet with the capability to produce drones already so democratized, the only way to limit their use is likely to be direct force. And thus the combat capabilities of even simple drones will, like printing, file-sharing, and every market for every illicit drug, become an arena of continual technological one-upmanship. Drone enthusiasts who work on national security issues have already started a "Drone Smackdown" tourney to explore tactics in their spare time.

So it isn't at all hard to imagine that somewhere down this road we will see a mashup of cheap drones and the sort of shanzhai warfare recently seen in Libya, and now in Syria, in which irregular forces hack together their own knock-off versions of much more expensive (and much more capable) weapons systems they have probably only seen on the Internet. But those DIY weapons systems seem to have done the job. So, too, will shanzhai combat drones.

Here is what we can look forward to: projectiles, nets, lasers or LEDs to blind cameras, strings dropped or shot onto rotors, aerosol cans turned into flying flamethrowers, salt water spray, chaff to disrupt near-field or optical communications, and simple electronic jamming. And each offensive mode will breed countermeasures. The fruits of idle and motivated minds will germinate. Almost any cheap drone will probably have a spare servo circuit or two to control on-board munitions. Adding capacity will be trivial. Remember: many drones are already flying smart phones, so whatever the mission, there's an app for that (see Pt I).

There will be casualties in these confrontations.  The drones, certainly, will suffer.  But sometimes the countermeasures will miss, causing damage to whatever and whomever is downrange.  And when drones are successfully destroyed, they will fall down.  Onto things.  And onto people.  Such as when a Sheriff's Department in Texas dropped a big drone onto its own SWAT team. Fortunately, the team was sheltered inside their armored car; we should all be so lucky.

In short, the drivers for an arms race are multifold: potential invasion of privacy by government or commercial drones (see Pt. III), attack and defense of file sharing swarms, attacks on (or hijacking of) and defense of cargo drones.  As costs fall, and capabilities improve, novel applications will emerge that will in turn drive ever more innovation in drone weapons systems, especially in countermeasures.

Regardless of what the rules are, of what the FAA and other authorities decide to allow, the economic incentives to employ drones as I have described above will drive behavior. There are just no two ways about it. We will be seeing some version of the world I have described in this series of posts. Consequently, any regulatory framework should facilitate the safe use of drones rather than attempt to constrain their use. What troubles me, and what motivated me to explore this topic, is that ongoing discussions of drone regulations will completely miss both the economic drivers and the technological ferment making it all possible. I'd like to be wrong about that, but history is likely to be an excellent guide. In the case of drones, as in every other attempt to regulate a democratized technology that serves a large and growing market, black markets will emerge. Nefarious applications of drones are inevitable, and poorly conceived regulation will be an accelerant that makes the problem worse. This is not an argument that all regulation is bad, merely an argument that regulation will be as poorly considered and poorly applied to drones as it was in all the other technological cases I have studied.

Finally, we must remember, first and foremost, that humans will continue to be the targets of armed drones wherever they fly. And, like the raptors that inspired me to think about drone combat, U.S. innovations in arming drones will come home to roost. That is the world we should be preparing for; have no illusions otherwise.

(Part I, Drones for Destruction, Construction, and Distribution; Part II, Pirate Hunting in the Clouds; Part III, Photos, Bullets, and Smuggling; Part IV, The Coming War Overhead)

Are These The Drones We're Looking For? (Part III)

(Part I, Drones for Destruction, Construction, and Distribution; Part II, Pirate Hunting in the Clouds; Part III, Photos, Bullets, and Smuggling; Part IV, The Coming War Overhead)

Photos, Bullets, and Smuggling

Unmanned aerial photography drones look to be the next big thing. They also look to be highly annoying and invasive. Earlier this year, the New York Times described a Los Angeles drone operator who had already been approached by paparazzi to take photos of celebrities.  Until regulatory issues got in the way, his previous job was in aerial real-estate photography, where there is also big demand. The Times article describes how the FAA must decide on rules for commercial drone use in aerial photography, among many other applications, by 2015. But it is the paparazzi gig that should get you thinking.

The reason the paparazzi take photos of famous people is money.  Famous people have money, and notoriety, and other people for some reason pay to peek in their windows and, frankly, up their skirts.  What is going to happen when paparazzi start to use drones?  Let's call these robots dronarazzi. (According to Wikipedia, the word paparazzi comes from Fellini's La Dolce Vita and is meant to suggest an annoying, buzzing insect.  My neologism may be superfluous given the racket current drones make, but it seems important to distinguish between humans and drones, don't you think?)  Very quickly after dronarazzi appear, famous people will attempt to use their money to get laws passed against them. Those laws will turn out to be unenforceable due to the profusion of hardware so cheap that it is disposable.  Famous, wealthy people will then spend some of their money to physically remove the annoyance of the dronarazzi.  And there it begins: drone countermeasures.

Drones have already been the subject of armed confrontation within U.S. borders.  Recently, hunters in Texas unhappy about a surveillance drone flown by animal rights activists proceeded to pretend it was a game bird.  The shoot-down was likely illegal; undoubtedly lawsuits are afoot.  As more drones take to the sky, there will certainly be more such confrontations.  Surveillance drones flown by law enforcement agencies, the DEA, and U.S. Customs will certainly be targets.  Even before law enforcement agencies find themselves involved in daily skirmishes, we will see countermeasure innovations crop up in -- no surprise here -- California.  Hollywood, to be specific. I would expect the first dronarazzi shoot-downs to happen fairly soon, even before the FAA sorts out the relevant regulations. And given how frequently paparazzi crash their cars into each other, their subjects, and bystanders, we can expect dronarazzi to cause analogous physical damage.

But look ahead just a bit, beyond photography, to a time when drones are providing real-time traffic or crowd monitoring, perhaps combined with face recognition, which you, the surveilled, may not want to allow.  What will the market look like for gizmos that prevent airborne cameras from imaging your face?  Or what about when small VTOL drones are actually moving stuff around in the real world?  That stuff could conceivably be your latest, packet-switched delivery from Amazon, or it could be the latest methamphetamine delivery from your drug dealer; it will be hard to tell the difference without physical inspection.  Law enforcement will want to track -- and almost certainly to inspect -- those cargoes, and many a sender and recipient will want to thwart both tracking and inspection.

The rules for drone flight set by the FAA will probably attempt to spell out specific allowed uses.  This decision will be informed both by 9/11 and by recent U.S. combat experience. We might see the definition of specific drone flight corridors, or specific drone flight characteristics, and federal, state, and local authorities may demand the ability to override the controls on drones through back doors in software.  But those back doors will be vulnerable to misuse, and are likely to be nailed shut even by above-board drone operators.  Who wants to lose control of a drone to the hacker kid next door? And, obviously, the economic incentive to cheat in the face of any drone flight or construction regulations will be absolutely enormous.  Many people will make the calculation (probably correctly) that, in the unlikely event that a suspect drone itself is caught or disabled, the operator will walk away scot-free because it simply may not be possible to identify her.  Yet I suspect that whatever the rules put forward by the FAA, and whatever powers of intervention in drone activity are given to law enforcement, it will all come down to whether people can be physically prevented from doing what they want with drones.  That is, can drone flight rules actually be enforced without the hands-on ability to capture or shut down scofflaw drones and operators?  The answer, very likely, is no, especially given the existing community of drone hackers who are proficient at producing both hardware and software. This brings us back to the proliferation of physical and electronic countermeasures.  And I question whether we are adequately planning for the future.

(Part I, Drones for Destruction, Construction, and Distribution; Part II, Pirate Hunting in the Clouds; Part III, Photos, Bullets, and Smuggling; Part IV, The Coming War Overhead)

Are These The Drones We're Looking For? (Part II)

(Part I, Drones for Destruction, Construction, and Distribution; Part II, Pirate Hunting in the Clouds; Part III, Photos, Bullets, and Smuggling; Part IV, The Coming War Overhead)

Pirate Hunting in the Clouds

Piracy is a perennial weed. For example, coordinated efforts to shut down electronic file sharing have had little effect; you can still find anything you want online.  The reason, of course, is that pirate hunters are always playing catchup to technological innovation that facilitates the anonymous movement of bits.  That should be no surprise to anyone involved, because the same sort of technological struggle has been present in print piracy since the days of Johannes Gutenberg.  Music, game, and movie piracy is just the same game on a new field.

The latest innovation in file sharing looks to be drones.  Two groups, The Pirate Bay (TPB) and Electronic Countermeasures, are building swarms of file-sharing drones meant to decentralize information storage and communications. TPB, in particular, propounds an ideology of sharing everything they can get their hands on by any means available. Says one contributor, "Everyone knows WHAT TPB is. Now they're going to have to think about WHERE TPB is."  File sharing may soon be located both metaphorically and physically in the clouds.

How will pirate-hunters respond to airborne, file-sharing drones?  Attempts will certainly be made to regulate airborne networks.  But that approach will probably fail, because regulation rarely makes headway against ideology.  Along with regulation will come electronic efforts to disrupt drone networks by jamming broadcasts and disrupting intraswarm communications.  That is likely to fail as well, because the drone networks will employ frequency bands used for many other devices, which will make drone-specific jamming technologically implausible, especially in signal-rich, urban environments.  Finally, both government and industry will embark on physically attacking the drones (to which I return in a moment).  But that isn't going to work either, because drones will soon be cheap enough to fire and forget.

At the moment, the hardware for each of the file-sharing drones is a bit pricey, north of $1000.  Inevitably, the cost will come down.  Quite capable toy quadcopters are available for only a few hundred dollars, whereas just a few years ago the same bird cost thousands.  You can be sure that other form factors will be used, too.  Fixed-wing and lighter-than-air drones are experiencing the same pressure for innovation as four-, six-, and eight-bladed 'copters.  Regardless of what sort of drones are employed in the network, any concerted effort to physically disrupt drones will simply result in more innovation and cost reduction by those who want to keep them in the air.  The economic motivation to fly drones in the face of regulations is compelling, whether for smuggling atoms or bits, and as a result there is every reason to think there will be clouds of drones in the air relatively soon.

As we start down this road, what's missing from the conversation is a concerted effort to ask, "What's next?"  Authorities might imagine they can constrain access to the physical hardware, but the manufacturing of drones is already well beyond anyone's control.  Any attempt at restricting access or use will merely create perverse incentives for greater innovation.

Hackers regularly modify commercially available drones to their own ends.  Beyond what comes in a kit, structural components for drones can be 3D-printed, with open source CAD files and parts lists available at Thingiverse and other repositories.  Whatever mechanical parts (such as propellers) are not now easily printable will undoubtedly soon be, and in any case can be easily molded in a variety of plastics.  MIT just announced a project to develop printable robots.  While the MIT paper 'bots are described as being terrestrial, you have to imagine that boffins are already cooking up aerial versions.  Contributing to the air of innovation, DARPA even runs a crowd-sourced UAV design competition, UAVForge.

So much for the hardware; what about control software? The University of Pennsylvania's Vijay Kumar and his collaborators at the GRASP Lab literally have drones jumping through hoops on command, and cooperating both to fly in formation and to build large structures. This academic project will certainly result in the publication of papers describing the relevant control algorithms, and quite probably the publication of the control code itself.  Imagining GRASP Lab projects out in the wild gives you something to think about.  When you put all this together, the combination of distributed designs and distributed manufacturing employing readily available motors and drive electronics means that, in the words of open source advocate Bruce Perens, "innovation has gone public".  (For more on that meme, see Perens' The Emerging Economic Paradigm of Open Source.)  As a result, there is no physical means available to law enforcement, or to anyone else, either to control access to drones or to control their use.  Combining wide access to hardware with inevitably open-source control code will produce a profusion of drone swarms. And yet some authorities will inevitably try to restrict access and use of drones, both in the name of public safety and to maintain a technological edge over putative scofflaws.  Up next: what if attempts at regulation just make things worse?
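To give a flavor of what publication of control code might mean in practice, here is a toy proportional-derivative (PD) altitude controller in Python.  The gains and vehicle dynamics are invented for illustration; this is not the GRASP Lab's algorithm, just the simplest kind of feedback loop that sits at the heart of such code.

    # Toy PD altitude hold for a point-mass "quadcopter". Illustrative only.
    def pd_thrust(target_alt, alt, climb_rate, kp=2.0, kd=1.2, hover=0.5):
        """Return a thrust command in [0, 1] that drives altitude toward target."""
        error = target_alt - alt
        command = hover + kp * error - kd * climb_rate
        return max(0.0, min(1.0, command))

    # Simulate a 10 m climb with a 50 Hz control loop.
    alt, vel, dt, g = 0.0, 0.0, 0.02, 9.81
    for _ in range(500):                      # 10 seconds of flight
        thrust = pd_thrust(10.0, alt, vel)
        accel = thrust * 2.0 * g - g          # max thrust is twice the weight
        vel += accel * dt
        alt += vel * dt
    print(f"altitude after 10 s: {alt:.2f} m")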

(Part I, Drones for Destruction, Construction, and Distribution; Part II, Pirate Hunting in the Clouds; Part III, Photos, Bullets, and Smuggling; Part IV, The Coming War Overhead)

Are These The Drones We're Looking For? (Part I)

Drones for Destruction, Construction, and Distribution

Drones, it seems, are everywhere. The news is full of the rapidly expanding use of drones in combat.   The U.S. government uses drones daily to gather intelligence and to kill people.   Other organizations, ranging from organized militaries in China, Israel, and Iran to militias like Hezbollah, aspire to possess similar capabilities.  Amateurs are in the thick of it, too; if a recent online video is to be believed, a few months of effort is all that is necessary to develop a DIY drone capable of deploying DIY antipersonnel ordnance.

Lest we think drones are only used to create mayhem, they are also used to create beauty.  Last year's lovely art project Flight Assembled Architecture employed a centrally-controlled swarm of small drones to build a complex, curving tower 6 meters tall.  Operating in a highly controlled environment, fully outfitted with navigational aids, each drone had to position itself precisely in six degrees of freedom (three in space, and three in rotation) in order to place each building block.  As our urban areas become sensor-rich environments, drones will soon have these remarkable navigational capabilities just about anywhere people live at high densities.

To understand the future capabilities of drones, you need merely think of them as flying smartphones running apps.  That's not a great leap, because smartphones are already used as the brains for some drones.  The availability of standard iPhones and Android phones has enabled a thriving market of third-party apps that provide ever new capabilities to the user.  Drone platforms will benefit from analogous app development.  Moreover, as hardware improves, so will the capabilities of apps.  For example, Broadcom recently announced a new chip that enables the integration of multiple kinds of signals -- GPS, magnetometer, altimeter, wi-fi, cell phone tower, gyroscopes, etc. -- and that "promises to indicate location ultra-precisely, possibly within a few centimeters, vertically and horizontally, indoors and out."  The advertised application of that chip is for cell phones, but you can be sure the chips will find their way into drones, if only via cell phones, and will then quickly be utilized by guidance apps.  Whatever the drone mission may be, there will be an app for that.
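To give a sense of what such a chip (or a guidance app) actually does with all those signals, here is a one-dimensional sketch of a complementary filter, blending a fast-but-drifty barometric altimeter with a slow-but-absolute GPS altitude.  The weights and readings are invented for illustration.

    # Complementary filter, 1-D altitude: integrate the fast sensor,
    # correct slowly toward the absolute one. Numbers are illustrative.
    def fuse(prev_estimate, baro_delta, gps_alt, alpha=0.98):
        return alpha * (prev_estimate + baro_delta) + (1 - alpha) * gps_alt

    estimate = 100.0
    samples = [(0.5, 100.4), (0.4, 100.9), (0.3, 101.3)]  # (baro change m, GPS m)
    for baro_delta, gps_alt in samples:
        estimate = fuse(estimate, baro_delta, gps_alt)
        print(f"fused altitude estimate: {estimate:.2f} m")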

When those individual, sensor-laden drones can cooperate, things get even more interesting.   Vijay Kumar's recent TED talk has must-see video of coordinated swarms of quad-rotor drones.  The drones, built at the GRASP Lab at the University of Pennsylvania, fly in formation, map outdoor and indoor environments, and as an ensemble play music on oversized instruments (see Double-O-Drone).  As you watch the videos, pay close attention to how well the drones understand their own position and speed, and how that information improves their flight capabilities.  When equipped with GPS and other sorts of sensors, drones are clearly capable of not just finding their way around complex environments but also of manipulating those environments.  At the moment, the drones' brains are actually in a stationary computer, with both sensory data and flight instructions wirelessly broadcast to and fro.  Moore's Law guarantees that those brains - including derivatives of the aforementioned Broadcom chip - will soon reside on the drones, thereby enabling real-time, local control, which will be necessary for autonomous operations at any real distance from home base.  The drones will become birds.  But these birds will have vertical take-off and landing (VTOL) capabilities, substantial load-carrying capacity, and will be able to work together towards ends set by humans.

A company called Matternet is already planning to exploit these capabilities.  The company's initial business model involves transporting goods in developing countries that lack adequate infrastructure.  If this strategy is successful, and if it can be scaled up, it will negate the need to build much of the fixed infrastructure that exists in the developed world.  It is a 21st century version of the Pony Express: think packet-switching, which makes the internet work efficiently, but for atoms rather than for bits.

Matternet plans that the first goods moved this way will be small, high-value perishables like pharmaceuticals.  But cargo size needn't be limited.  As Vijay Kumar pointed out in his TED talk, drones can cooperate to lift and transport larger objects.  While power or fuel will undoubtedly constrain some of these plans until technology catches up to aspirations, drones will inevitably be used to move larger and larger objects over longer and longer distances.  The technology will also be used very soon in the U.S.  The FAA has been directed to come up with rules for commercial drone use by 2015, and must sort out how to enable emergency agencies to use drones in 2012.  There are already 61 organizations in the U.S. with permission to fly drones in civilian airspace.  Yet rather less thought has been given to drone use outside the rules.  We are planning for drones, after a fashion, but what about after they arrive?

(Part I, Drones for Destruction, Construction, and Distribution; Part II, Pirate Hunting in the Clouds; Part III, Photos, Bullets, and Smuggling; Part IV, The Coming War Overhead)

Further Thoughts on iGEM 2011

Following up on my post of several weeks ago (iGEM 2011: First Thoughts), here is a bit more on last year's Jamboree.  I remain very, very impressed by what the teams did this year.  And I think that watching iGEM from here on out will provide a sneak peek of the future of biological technologies.

I think the biggest change from last year is the choice of applications, which I will describe below.  Related to the choice of applications is a change of approach: teams are following a more complete design philosophy.  I'll get to the shift in design sensibility further on in the post.

The University of Washington: Make it or Break it

I described previously the nuts and bolts of the University of Washington's Grand Prize winning projects.  But, to understand the change in approach (or perhaps change in scope?) this project represents, you also have to understand a few details about problems in the real world.  And that is really the crux of the matter -- teams this year took on real world problems as never before, and may have produced real world solutions.

Recall that one of the UW projects was the design of an enzyme that digests gluten, with the goal of using that enzyme to treat gluten intolerance.  Candidate enzymes were identified through examining the literature, with the aim of finding something that works at low pH.  The team chose a particular starter molecule, and then used the "video game" Foldit to re-design the active site in silico so that it would chew up gluten (here is a very nice YouTube video on the Foldit story from Nature).  They then experimentally tested many of the potential improvements.  The team wound up with an enzyme that in a test tube is ~800 times better than one already in clinical trials.  While the new enzyme would of course itself face lengthy clinical trials, the team's achievement could have an enormous impact on people who suffer from celiac disease and other forms of gluten intolerance.

From a story in last week's NYT Magazine ("Should We All Go Gluten-Free?"), here are some eye-opening stats on celiac disease, which can cause symptoms ranging from diarrhea to dramatic weight loss:

  • Prior to 2003, prevalence in the US was thought to be just 1 in 10,000: widespread testing revealed the actual rate was 1 in 133.
  • Current estimates are that 18 million Americans have some sort of gluten intolerance, which is about 5.8% of the population.
  • Based on analysis of archived blood samples, young people were 5x more likely to have the disease in the 1990s than in the 1950s.
  • Prevalence is increasing not just in the US, but also worldwide.

In other words, celiac disease is a serious metabolic issue that for some reason is affecting ever larger parts of the global population.  And as a summer project a team of undergraduates may have produced a (partial) treatment for the disease.  That eventual treatment would probably require tens of millions of dollars of further investment and testing before it reaches the market.  However, the market for gluten-free foods, as estimated in the Times, is north of $6 billion and growing rapidly.  So there is plenty of market potential to drive investment based on the iGEM project.

The other UW project is a demonstration of using E. coli to directly produce diesel fuel from sugar.  The undergraduates first reproduced work published last year from LS9 in which E. coli was modified to produce alkanes (components of diesel fuel -- here is the Science paper by Schirmer et al).  Briefly, the UW team produced biobricks -- the standard format used in iGEM -- of two genes that turn fatty acids into alkanes.  Those genes were assembled into a functional "Petrobrick".  The team then identified and added a novel gene to E. coli that builds fatty acids from three-carbon seeds (rather than the native E. coli system, which builds on two-carbon seeds).  The resulting fatty acids then served as substrates for the Petrobrick, resulting in what appears to be the first report anywhere of even-chain alkane synthesis.  All three genes were packaged up into the "FabBrick", which contains all the components needed to let E. coli process sugar into a facsimile of diesel fuel.
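(As a schematic of that composition -- the gene names below are descriptive stand-ins, not the actual part identifiers in the Registry -- the nesting looks like this:)

    # Schematic only: BioBrick-style composition as nested parts lists.
    petrobrick = [
        "fatty-acid-to-aldehyde gene",   # first of the two alkane-synthesis steps
        "aldehyde-to-alkane gene",       # second step
    ]
    fabbrick = ["three-carbon-seed fatty acid gene"] + petrobrick

    print("Petrobrick:", petrobrick)
    print("FabBrick:  ", fabbrick)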

The undergraduates managed to substantially increase the alkane yield by massaging the culture conditions, but the final yield is a long way from being useful to produce fuel at volume.  But again, not bad for a summer project.  This is a nice step toward turning first sugar, then eventually cellulose, directly into liquid fuels with little or no purification or post-processing required.  It is, potentially, also a step toward "Microbrewing the Bioeconomy".  For the skeptics in the peanut gallery, I will be the first to acknowledge that we are probably a long way from seeing people economically brew up diesel in their garage from sugar.  But, really, we are just getting started.  Just a couple of years ago people thought I was all wet forecasting that iGEM teams would contribute to technology useful for distributed biological manufacturing of fuels.  Now they are doing it.  For their summer projects.  Just wait a few more years.

Finally -- yes, there's more -- the UW team worked out ways to improve the cloning efficiency of so-called Gibson cloning.  They also packaged up as biobricks all the components necessary to produce magnetosomes in E. coli.  The last two projects didn't make it quite as far as the first two, but still made it further than many others I have seen in the last 5 years.

Before moving on, here is a thought about the mechanics of participating in iGEM.  I think the UW wiki is about the best I have seen.   I like very much the straightforward presentation of hypothesis, experiments, and results.  It was very easy to understand what they wanted to do, and how far they got.  Here is the "Advice to Future iGEM Teams" I posted a few years ago.  Aspiring iGEM teams should take note of the 2011 UW wiki -- clarity of communication is part of your job.

Lyon-INSA-ENS: Cobalt Buster

The team from Lyon took on a very small problem: cleaning up cooling water from nuclear reactors using genetically modified bacteria.  This was a nicely conceived project that involved identifying a problem, talking to stakeholders, and trying to provide a solution.  As I understand it, there are ongoing discussions with various sponsors about funding a start-up to build prototypes.  It isn't obvious that the approach is truly workable as a real world solution -- many questions remain -- but the progress already demonstrated indicates that dismissing this project would be premature.

Before continuing, I pause to reflect on the scope of Cobalt Buster.  One does wonder about the eventual pitch to regulators and the public: "Dear Europe, we are going to combine genetically modified organisms and radiation to solve a nuclear waste disposal problem!"  As the team writes on its Human Practices page: "In one project, we succeed to gather Nuclear Energy and GMOs. (emphasis in original)"  They then acknowledge the need to "focus on communication".  Indeed.

Here is the problem they were trying to solve: radioactive cobalt (Co) is a contaminant emitted during maintenance of nuclear reactors.  The Co is typically cleaned up with ion exchange resins, which are both expensive and, when used up, must be appropriately disposed of as nuclear waste.  By inserting a Co importer pump into E. coli, the Lyon team hopes to use bacteria to concentrate the Co and thereby clean up reactor cooling water.  That sounds cool, but the bonus here is that modelling of the system suggests that using E. coli as a biofilter in this way would result in substantially less waste.  The team reports that they expect 8000 kg of ion exchange resins could be replaced with 4 kg of modified bacteria.  That factor of 2000 reduction in material would have a serious impact on disposal costs.  And the modified bug appears to work in the lab (with nonradioactive cobalt), so this story is not just marketing.

The Lyon team also inserted a Co sensor into their E. coli strain.  The sensor then drove expression of a protein that forms amyloid fibers, causing the coli in turn to form a biofilm.  This biofilm would stabilize the biofilter in the presence of Co.  The filter would only be used for a few hours before being replaced, which would not give the strain enough time to lose this circuit via selection.

Imperial College London: Auxin

Last, but certainly not least, is the very well thought through Imperial College project to combat soil erosion by encouraging plant root growth.  I saved this one for last because, for me, the project beautifully reflects the team's intent to carefully consider the real-world implications of their work.  There are certainly skeptics out there who will frown on the extension of iGEM into plants, and who feel the project would never make it into the field due to the many regulatory barriers in Europe.  I think the skeptics are completely missing the point.

To begin, a summary of the project: the Imperial team's idea was to use bacteria as a soil treatment, applied in any number of ways, that would be a cost-effective means of boosting soil stability through root growth.  The team designed a system in which genetically modified bacteria would be attracted to plant roots, would then take up residence in those roots, and would subsequently produce a hormone that encourages root growth.

The Auxin system was conceived to combine existing components in very interesting ways.  Naturally-occurring bacteria have already been shown to infiltrate plant roots, and other soil-dwelling bacteria produce the same growth hormone that encourages root proliferation.

Finally, the team designed and built a novel (and very clever) system for preventing leakage of transgenes through horizontal gene transfer.  On the plasmid containing the root growth genes, the team also included genes that produce proteins toxic to bacteria.  But in the chromosome, they included an anti-toxin gene.  Thus if the plasmid were to leak out and be taken up by a bacterium without the anti-toxin gene, any gene expression from the plasmid would kill the recipient cell.
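The containment logic is simple enough to write out as a truth table.  Here is a toy sketch in Python -- purely illustrative, not a model of the actual genetic constructs:

    # Kill-switch logic: toxin genes ride on the plasmid, the antitoxin gene
    # sits on the host chromosome. Illustrative only.
    def cell_survives(has_plasmid, has_antitoxin):
        toxin_expressed = has_plasmid
        return (not toxin_expressed) or has_antitoxin

    print("engineered host (plasmid + antitoxin):", cell_survives(True, True))
    print("wild recipient of a leaked plasmid:   ", cell_survives(True, False))
    print("plain wild cell (no plasmid):         ", cell_survives(False, False))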

The team got many of these pieces working independently, but didn't quite get the whole system working together in time for the international finals.  I encourage those interested to have a look at the wiki, which is really very good.

The Shift to Thinking About Design

As impressive as Imperial's technical results were, I was also struck by the integration of "human practices" into the design process.  The team spoke to farmers, economists, Greenpeace -- the list goes on -- as part of both defining the problem and attempting to finesse a solution given the difficulty of fielding GMOs throughout the UK and Europe.  And these conversations very clearly impacted the rest of the team's activities.

One of the frustrations felt by iGEM teams and judges alike is that "human practices" has often felt like something tacked on to the science for the sake of placating potential critics.  There is something to that, as the Ethical, Legal, and Social Implications (ELSI) components of large federal projects such as The Human Genome Project and SynBERC appear to have been tacked on for just that reason.  Turning "human practices" into an appendix on the body of science is certainly not the wisest way to go forward, for reasons I'll get to in a moment, nor is it politically savvy in the long term.  But if the community is honest about it, tacking on ELSI to get funding has been a successful short-term political hack.

The Auxin project, along with a few other events during the finals, helped crystallize for me the disconnect between thinking about "human practices" as a mere appendix while spouting off about how synthetic biology will be the core of a new industrial revolution, as some of us tend to do.  Previous technological revolutions have taught us the importance of design, of thinking the whole project through at the outset in order to get as much right as possible, and to minimize the stuff we get wrong.  We should be bringing that focus on design to synthetic biology now.

I got started down this line of thought during a very thought-provoking conversation with Dr. Megan Palmer, the Deputy Director for Practices at SynBERC.  (Apologies to you, Megan, if I step on your toes in what follows -- I just wanted to get these thoughts on the page before heading out the door for the holidays.)  The gist of my chat with Megan was that the focus on safety and security as something else, as an activity separate from the engineering work of synthetic biology, is leading us astray.  The next morning, I happened to pass Pete Carr and Mac Cowell having a chat just as one of them was saying, "The name human practices sucks. We should really change the name."  And then my brain finally -- amidst the jet lag and 2.5 days of frenetic activity serving as a judge for iGEM -- put the pieces together.  The name does suck.  And the reason it sucks is that it doesn't really mean anything.

What the names "human practices" and "ELSI" are trying to get at is the notion that we shouldn't stumble into developing and using a powerful technology without considering the consequences.  In other fields, whether you are thinking about building a chair, a shoe, a building, an airplane, or a car, in addition to the shape you usually spend a great deal of time thinking about where the materials come from, how much the object costs to make, how it will be used, who will use it, and increasingly how it will be recycled at end of use.  That process is called design, and we should be practicing it as an integral part of manipulating biological systems.

When I first started as a judge for iGEM, I was confused by the kind of projects that wound up receiving the most recognition.  The prizes were going to nice projects, sure, but those projects were missing something from my perspective.  I seem to recall protesting at some point in that first year that "there is an E in iGEM, and it stands for Engineering."  I think part of that frustration was that the pool of judges was dominated for many years by professors funded by the NIH, NRC, or the Wellcome Trust, for example -- scientists who were looking for scientific results they liked to grace the pages of Science or Nature -- rather than engineers, hackers, or designers who were looking for examples of, you know, engineering.

My point is not that the process of science is deficient, nor that all lessons from engineering are good -- especially as for years my own work has fallen somewhere in between science and engineering.  Rather, I want to suggest that, given the potential impact of all the science and engineering effort going into manipulating biological systems, everyone involved should be engaging in design.  It isn't just about the data, nor just about shiny objects.  We are engaged in sorting out how to improve the human condition, which includes everything from uncovering nature's secrets to producing better fuels and drugs.  And it is imperative that as we improve the human condition we do not diminish the condition of the rest of the life on this planet, as we require that life to thrive in order that we may thrive.

Which brings me back to design.  It is clear that not every experiment in every lab that might move a gene from one organism to another must consider the fate of the planet as part of the experimental design.  Many such experiments have no chance of impacting anything outside the test tube in which they are performed.  But the practice of manipulating biological systems should be done in the context of thinking carefully about what we are doing -- much more carefully than we have been, generally speaking.  Many fields of human endeavor can contribute to this practice.  There is a good reason that ELSI has "ethical", "legal", and "social" in it.

There have been a few other steps toward the inclusion of design in iGEM over the years.  Perhaps the best example is the work designers James King and Daisy Ginsburg did with the 2009 Grand Prize Winning team from Cambridge (see iGEM 2009: Got Poo?).  That was lovely work, and was cleverly presented in the "Scatalog".  You might argue that the winners over the years have had increasingly polished presentations, and you might worry that style is edging out substance.  But I don't think that is happening.  The steps taken this year by Imperial, Lyon, and Washington toward solving real-world problems were quite substantive, even if those steps are just the beginning of a long path to get solutions into people's hands.  That is the way innovation works in the real world.

Diffusion of New Technologies

A Tweet and blog post from Christina Cacioppo about technological diffusion led me to dig out a relevant slide and text from my book.  Ms. Cacioppo, reflecting on a talk she just saw, asks "Are we really to believe there was no 'new' technology diffusion between 1950 and 1990? I thought this was the US's Golden Age of Growth. (Should we include penicillin, nuclear power, or desktop computers on this chart?)"  There is such data out there, but it can be obscure.

As it happens, thanks to my work with bio-era, I am familiar with a 1997 Forbes piece by Peter Brimelow that explores what he called "The Silent Boom".  Have a look at the text (the accompanying chart is not available online), but basically the idea is that keeping track of the cost of a technology is less informative than tracking actual market penetration, which is sometimes called "technological diffusion".  The time between the introduction of a technology and widespread adoption is a "diffusion lag".  The interesting thing for me is that there appears to be a wide distribution of diffusion lags; that is, some technologies hit the market fast (which can still mean decades) while others can take many more decades.  There really isn't enough data to say anything concrete about how diffusion lags are changing over time, but I am willing to speculate that not only are the lags getting shorter (more rapid market adoption), but that the pace of adoption is getting faster (steeper slope).  Here is the version of the chart I use in my talks, followed by a snippet of related text from my book (I am sure there is a better data set out there, but I have not yet stumbled over it):

carlson_silent_boom.png
And from pg 60 of Biology is Technology:

Diffusion lags in acceptance appear frequently in the adoption of new technologies over the last several centuries. After the demonstration of electric motors, it took nearly two decades for penetration in U.S. manufacturing to reach 5% and another two decades to reach 50%. The time scale for market penetration is often decades[6] (see Figure 5.6). There is, however, anecdotal evidence that adoption of technologies may be speeding up; "Prices for electricity and motor vehicles fell tenfold over approximately seven decades following their introduction. Prices for computers have fallen nearly twice as rapidly, declining ten-fold over 35 years."[4]

Regardless of the time scale, technologies that offer fundamentally new ways of providing services or goods tend to crop up within contexts set by preceding revolutions. The interactions between the new and old can create unexpected dynamics, a topic I will return to in the final chapter. More directly relevant here is that looking at any given technology may not give sufficient clues as to the likely rate of market penetration. For example, while the VCR was invented in 1952, adoption remained minimal for several decades. Then, in the late 1970s, the percentage of ownership soared. The key underlying change was not that consumers suddenly decided to spend more time in front of the television, but rather that a key component of VCRs, integrated circuits, themselves only a few decades old at the time, started falling spectacularly in price. That same price dynamic has helped push the role of integrated circuits into the background of our perception, and the technology now serves as a foundation for other "independent" technologies ranging from mobile phones, to computers, to media devices.
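Returning to my speculation above about shorter lags and steeper slopes, here is a minimal numerical sketch in Python: logistic adoption curves in which a later technology reaches its midpoint sooner and climbs faster.  All parameters are invented for illustration.

    import math

    def penetration(year, midpoint, steepness):
        """Logistic market-penetration curve, from 0 to 1."""
        return 1.0 / (1.0 + math.exp(-steepness * (year - midpoint)))

    # (years from introduction to 50% penetration, slope of adoption)
    technologies = {
        "earlier technology (long lag, gentle slope)": (40, 0.15),
        "later technology (short lag, steep slope)":   (15, 0.45),
    }
    for name, (midpoint, steepness) in technologies.items():
        curve = ", ".join(f"{penetration(t, midpoint, steepness):.2f}"
                          for t in range(0, 61, 10))
        print(f"{name}: {curve}")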