I'll be in Europe and the UK for all of May, based in Cambridge. I'll be making trips to London, Edinburgh, and Paris, and probably The Netherlands, all to give talks and visit with iGEM teams and other students.
Anybody interested in chatting?
There appears to be uncertainty over just which genes are in the genome of the H1N1 virus now causing illness.
(Update: Must read for anyone interested in the present situation: the CIDRAP Swine Influenza Overview.)
As of the evening of Tuesday, 28 April, CNN is reporting that:
The new virus has genes from North American swine influenza, avian influenza, human influenza and a form of swine influenza normally found in Asia and Europe, said Nancy Cox, chief of the CDC's Influenza Division.
However, today's ProMED mail contained the following exchange.
From Professor Roger Morris, at Massey University, New Zealand, a whole bunch of really good questions:
For those of us who are involved in international work on influenza epidemiology and control and responding to the many media enquiries, there is a very large information gap in relation to diagnosis and epidemiology of the Mexican influenza. What is known of the genetic structure of this virus? It has been called a swine flu, but no evidence has been put forward to allow this statement to be evaluated. I have received information that it is a reassortant, which has genetic components from 4 different sources, but nothing official has been released on this. Where does it fit phylogenetically? Is there any genetic variation of significance among the isolates investigated? Would this help to explain the difference in severity of disease between Mexico and other countries?
It is also stated that it should be diagnosed by RT-PCR, without clarifying which PCR. I have received information that the standard PCR for H1 does not reliably detect this virus. Is this true? What is an appropriate series of diagnostic steps for samples from suspect cases? Could we have an authoritative statement on these issues from one of the laboratories, which has been working with the virus?
In response, here is Professor Raul Rabadan, of Columbia University College of Physicians and Surgeons, who is digging into the flu genome sequences filed at NCBI and finds that the sequence appears to be solely of swine (swinian?) origin:
In relation to the questions posed by Prof. Morris: My group and I are analyzing the recent sequences from the isolates in Texas and California of swine H1N1 deposited in National Center for Biotechnology Information (NCBI) (A/California/04/2009(H1N1), A/California/05/2009(H1N1), A/California/06/2009(H1N1), A/California/07/2009(H1N1), A/California/09/2009(H1N1), A/Texas/04/2009(H1N1) and A/Texas/05/2009(H1N1)).
The preliminary analysis using all the sequences in public databases (NCBI) suggests that all segments are of swine origin. NA and MP seem related to Asian/European swine and the rest to North American swine (H1N2 and H3N2 swine viruses isolated since 1998). There is also interesting substratification between these groups, suggesting a multiple reassortment.
We are puzzled about sources of information that affirm that the virus is a reassortment of avian, human and swine viruses. It is true that the H3N2 swine virus from 1998 and 1999 is a triple reassortant, but all the related isolates are found since then in swine.
In lay English: the virus is composed of pieces of other viruses found in pigs. While the structure of the genome is curious, in that it appears the different viruses exchanged genome segments multiple times, there isn't any sign that the present genome of concern contains elements of avian or human flu viruses.
(Update: I just stumbled over a 21 April CDC briefing that describes the genomes of H1N1 viruses in pediatric cases in California as entirely of swine origin.)
So it isn't at all clear why the press (and government officials) keep repeating the assertion that the new virus is some sort of amazing Frankenstein strain. The message containing Professor Rabadan's comments also notes that a mess of new sequences from clinical isolates were filed today in the GISAID database. Analysis of those sequences should help clarify the origin -- or at least the composition of the genome -- of the virus in the coming days.
The press also continues to bray about flies as the vector, when there is no evidence I can find in any literature, anywhere, that suggests flies have ever been associated with transmitting the flu. If this particular bug did figure out how to hitch a ride on flies, that would be some seriously scary evolutionary juju. Intelligent design, even. We would all be in deep trouble. But, as there is no evidence to support these assertions other than repeating what other reporters are saying, my recommendation to all you in the press would be simply this: STOP.
Similarly, the notion that at this early date anyone could possibly have identified the index case ("Patient 0") as a young boy in some village in Mexico is -- let me choose my words very carefully here -- COMPLETE PIGSHIT. With so little molecular forensics done on the virus, and no real map of who is actually sick, who has been sick, nor when or where they were sick, publishing the name of an innocent four-year old boy based on cribbing from some other reporter's story is the height of irresponsible journalism. Where the fuck are the editors?
(Update: The New York Times is still repeating this nonsense: "...The Mexican government has identified a young boy as the first person in the country infected with swine flu...". Waaay down in the story it acknowledges that the village the boy is from "may not, in the end, be found to be the source of anything" and then goes on to describe earlier potential cases. Oy.)
Perhaps reporters should try a little, oh, I don't know, reporting. Visit ProMED mail. Check out CIDRAP and Effect Measure. Stop reading what other reporters write, and think for yourselves. We will all be better off.
Well, it looks like we got surprised. Just like we, um, expected. To be surprised, that is.
It's been quite a while since I wrote anything about the flu, but I suppose I should start keeping track of interesting new developments.
We should consider the clock started on vaccine development. Various reports suggest that Baxter is already at work at the request of the Mexican government. News outlets are being very careless, throwing around phrases like "vaccines are at least six months away", when it would surprise me if anything became available in less than nine months. I expect it to be more like 12-18 months, but I really, truly, hope I am wrong about this. All of a sudden we are doing a real-world test of our preparedness.
There is excellent coverage, as usual, over at EffectMeasure. Other reporting is sort of spotty. I keep seeing stories (Wired, CNN, even the NYT) reporting that the CDC says vomiting and diarrhea are symptoms of the flu, when what the CDC says is that "some people report" those symptoms for the flu. Usually GI tract symptoms like that are due to noroviruses (think cruise ships), not influenza viruses. But I suppose we could be seeing something new.
I just heard a report from the BBC suggesting that Mexico thinks as many as 2000 people have been infected, with Mexico's health minister putting the death toll at 149. That would put the fatality rate at 7.5%, which would be extremely high for the flu. It is too early to say whether those numbers are realistic, especially since Mexico will have difficulty making positive molecular diagnoses. I would expect a retrospective analysis of this outbreak to determine that many, many more people have been exposed and infected than presently reported. It is certainly puzzling that all the deaths have thus far been confined to Mexico.
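For what it's worth, the arithmetic behind that figure is just a naive case-fatality ratio computed from the BBC-reported numbers above; it says nothing about how many mild infections went uncounted:

```python
# Naive case-fatality ratio from the BBC-reported figures above.
# This ignores undercounting of mild infections, which is one reason
# retrospective analyses usually revise such early estimates downward.
suspected_cases = 2000
deaths = 149

cfr = deaths / suspected_cases
print(f"Naive CFR: {cfr:.2%}")  # roughly 7.5%

# If many more people were actually infected than reported, the
# implied fatality rate drops by the same factor.
for undercount in (1, 5, 10, 20):
    implied = deaths / (suspected_cases * undercount)
    print(f"{undercount:>2}x undercount -> implied CFR {implied:.2%}")
```

The undercount scan is the retrospective-analysis point in miniature: the headline rate is only as good as the denominator.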
It seems that cases are already spread across the world. Here is a Google Maps version of suspected and confirmed cases, which looks to be maintained by Henry Niman. Good show Dr. Niman, even though I haven't always seen eye to eye with you on your ideas about the flu and SARS. Niman seems to be maintaining a bunch of other such maps, which are worth checking out, including H5N1 in Egypt and ... "SARS 2009" -- WTF!!!
*shudder*
Back to H1N1: According to this ProMED summary, Israel is taking the most important step it can in preparing:
Israel renames unkosher swine flu.
Israel's health minister updates a nervous public about the swine flu epidemic - and starts by renaming it Mexican flu.
Perhaps my slight turn toward black humor here reflects the fact that I just don't see that things have improved very much since 2005. In mid-February of this year, I sat around a table in DC with a bunch of people who had been called together to discuss biopreparedness, whether for natural or artificial threats. The person convening the meeting suggested that basically everyone who deeply cared about the issue in DC was in the room, and it was a disturbingly small group.
Also disturbing was what those people reported about their experiences in trying to prepare the US for the inevitable appearance of biothreats. The news wasn't encouraging. Another anecdote for context -- in 2005 I had a conversation with the head of Asian operations for one of the two remaining international express shipping companies. At that time, his company hadn't given much thought to the flu -- this was before all the hullabaloo -- and he suggested that, should H5N1 become a problem, the company would simply stop flying. An executive from a major disposable syringe manufacturer then suggested there would be no way to keep up with demand if that shipping stopped. I went on to write here, and elsewhere, about what might happen to not just our economy, but also our R&D efforts, if plastic labware and rubber gloves made in Asia were stuck there. I can report that, as of February this year, there are at least a few stockpiles of critical supplies here in the States, but that the academics, state, and federal officials around that table in DC were far less than sanguine about our state of preparedness. One professor, who was running an ongoing assessment of his state's preparedness, suggested that they were still having trouble getting the basic data they needed on the available stock of consumables in hospitals.
I have been concentrating on other topics for the last eighteen months or so, and so I raised my hand to express my incredulous dismay that things haven't improved in 4 years. That generated an interesting response. About half the room assured me it was okay, and the other half assured me my dismay was entirely warranted. Great.
Thus my slightly foul mood as a new potential threat is rapidly finding its way around the globe. That and the fact that I am about to climb into an airplane bound for the UK -- eight hours in a closed environment with hundreds of international travelers at the beginning of a potential epidemic. Oh, joy.
Where's my Tamiflu?
I am catching up on past issues of Nature Biotech. Here are a few things that caught my eye:
(Feb 09) Cuba is launching a domestically produced GM corn. The strain (which looks from the name to contain Bt) is to be used in animal feed. Another sign that developing countries treat biotech as an important national initiative, and that they can push the technology on their own.
(Feb 09) Researchers in Belgium got fed up with efforts to get their field trial for GM poplars approved in-country, and are taking the trial to the Netherlands. So much for uniformly applying laws on planting GM crops in Europe. (Mar 09) EU environment ministers voted to overturn the European Commission's initiative to force member states to lift national bans.
(April 09) Malaysia has dropped several billions of dollars on biotech as part of their stimulus package. More on this when I dig into it.
While writing a proposal for a new project, I've had occasion to dig back into Moore's Law and its origins. I wonder, now, whether I peeled back enough of the layers of the phenomenon in my book. We so often hear about how more powerful computers are changing everything. Usually the progress demonstrated by the semiconductor industry (and now, more generally, IT) is described as the result of some sort of technological determinism instead of as the result of a bunch of choices -- by people -- that produce the world we live in. This is on my mind as I continue to ponder the recent failure of Codon Devices as a commercial enterprise. In any event, here are a few notes and resources that I found compelling as I went back to reexamine Moore's Law.
What is Moore's Law?
First up is a 2003 article from Ars Technica that does a very nice job of explaining the whys and wherefores: "Understanding Moore's Law". The crispest statement within the original 1965 paper is "The number of transistors per chip that yields the minimum cost per transistor has increased at a rate of roughly a factor of two per year." At its very origin, Moore's Law emerged from a statement about cost, and economics, rather than strictly about technology.
I like this summary from the Ars Technica piece quite a lot:
Ultimately, the number of transistors per chip that makes up the low point of any year's curve is a combination of a few major factors (in order of decreasing impact):
- The maximum number of transistors per square inch (or, alternately put, the size of the smallest transistor that our equipment can etch)
- The size of the wafer
- The average number of defects per square inch
- The costs associated with producing multiple components (i.e. packaging costs, the costs of integrating multiple components onto a PCB, etc.)
In other words, it's complicated. Notably, the article does not touch on any market-associated factors, such as demand and the financing of new fabs.
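To see how those factors trade off, here is a toy model (not from the Ars Technica article; the Poisson yield curve is a standard textbook simplification, and every number below is invented for illustration) that finds the transistor count per die minimizing the cost per working transistor:

```python
import math

# Toy model of cost per working transistor vs. transistors per die.
# Packing more transistors per die amortizes packaging cost, but
# bigger dies suffer worse yield. The shape of the curve, not the
# values, is the point.
WAFER_COST = 5000.0          # $ per processed wafer (invented)
WAFER_AREA = 70000.0         # mm^2, roughly a 300 mm wafer
DEFECTS_PER_MM2 = 0.002      # defect density (invented)
TRANSISTOR_AREA = 1e-6       # mm^2 per transistor (invented)
PACKAGE_COST = 1.0           # $ per packaged die (invented)

def cost_per_transistor(transistors_per_die: float) -> float:
    die_area = transistors_per_die * TRANSISTOR_AREA
    dies_per_wafer = WAFER_AREA / die_area
    yield_fraction = math.exp(-DEFECTS_PER_MM2 * die_area)  # Poisson yield
    good_dies = dies_per_wafer * yield_fraction
    cost_per_die = WAFER_COST / good_dies + PACKAGE_COST
    return cost_per_die / transistors_per_die

# Scan die sizes and find the sweet spot.
candidates = [10**k for k in range(6, 12)]
best = min(candidates, key=cost_per_transistor)
for n in candidates:
    print(f"{n:>15,} transistors/die -> ${cost_per_transistor(n):.2e} each")
print("cheapest per transistor at", f"{best:,}", "transistors/die")
```

With these made-up parameters the minimum sits in the middle of the scan: too few transistors per die and packaging costs dominate, too many and yield collapses. That minimum is exactly the "low point of any year's curve" Moore was tracking.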
The Wikipedia entry on Moore's Law has some good information, but isn't very nuanced.
Next, here is an excerpt from an interview Moore did with Charlie Rose in 2005:
Charlie Rose: ...It is said, and tell me if it's right, that this was part of the assumptions built into the way Intel made its projections. And therefore, because Intel did that, everybody else in the Silicon Valley, everybody else in the business did the same thing. So it achieved a power that was pervasive.
Gordon Moore: That's true. It happened fairly gradually. It was generally recognized that these things were growing exponentially like that. Even the Semiconductor Industry Association put out a roadmap for the technology for the industry that took into account these exponential growths to see what research had to be done to make sure we could stay on that curve. So it's kind of become a self-fulfilling prophecy.
Semiconductor technology has the peculiar characteristic that the next generation always makes things higher performance and cheaper - both. So if you're a generation behind the leading edge technology, you have both a cost disadvantage and a performance disadvantage. So it's a very non-competitive situation. So the companies all recognize they have to stay on this curve or get a little ahead of it.
Keeping up with 'the Law' is as much about the business model of the semiconductor industry as about anything else. Growth for the sake of growth is an axiom of western capitalism, but it is actually a fundamental requirement for chipmakers. Because the cost per transistor is expected to fall exponentially over time, you have to produce exponentially more transistors to maintain your margins and satisfy your investors. Therefore, Intel set growth as a primary goal early on. Everyone else had to follow, or be left by the wayside. The following is from the recent Briefing in The Economist on the semiconductor industry:
...Even the biggest chipmakers must keep expanding. Intel today accounts for 82% of global microprocessor revenue and has annual revenues of $37.6 billion because it understood this long ago. In the early 1980s, when Intel was a $700m company--pretty big for the time--Andy Grove, once Intel's boss, notorious for his paranoia, was not satisfied. "He would run around and tell everybody that we have to get to $1 billion," recalls Andy Bryant, the firm's chief administrative officer. "He knew that you had to have a certain size to stay in business."
Grow, grow, grow
Intel still appears to stick to this mantra, and is using the crisis to outgrow its competitors. In February Paul Otellini, its chief executive, said it would speed up plans to move many of its fabs to a new, 32-nanometre process at a cost of $7 billion over the next two years. This, he said, would preserve about 7,000 high-wage jobs in America. The investment (as well as Nehalem, Intel's new superfast chip for servers, which was released on March 30th) will also make life even harder for AMD, Intel's biggest remaining rival in the market for PC-type processors.
AMD got out of the atoms business earlier this year by selling its fab operations to a sovereign wealth fund run by Abu Dhabi. We shall see how they fare as a bits-only design firm, having sacrificed the ability to push (and rely on) scale themselves.
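The treadmill Grove understood can be put in toy arithmetic. Assuming, purely for illustration, that the price per transistor halves every two years, a chipmaker must double unit volume on the same schedule just to keep revenue flat:

```python
# If the price per transistor halves every 2 years, a chipmaker must
# ship twice as many transistors every 2 years just to hold revenue
# constant. The starting price and volume are invented, not Intel's.
price0 = 1e-6      # $ per transistor at year 0 (invented)
volume0 = 1e12     # transistors shipped per year at year 0 (invented)

for year in range(0, 11, 2):
    price = price0 * 0.5 ** (year / 2)
    flat_revenue_volume = volume0 * 2 ** (year / 2)
    print(f"year {year:>2}: price ${price:.2e}, "
          f"volume needed {flat_revenue_volume:.2e}")
    # Revenue stays constant by construction: price * volume is fixed.
    assert abs(price * flat_revenue_volume - price0 * volume0) < 1e-6
```

Growth for the sake of growth, in other words, is baked into the cost curve itself.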
Where is Moore's Law Taking Us?
Here are a few other tidbits I found interesting:
Re the oft-forecast end of Moore's Law, here is Michael Kanellos at CNET grinning through his prose: "In a bit of magazine performance art, Red Herring ran a cover story on the death of Moore's Law in February--and subsequently went out of business."
And here is somebody's term paper (no disrespect there -- it is actually quite good, and is archived at Microsoft Research) quoting an interview with Carver Mead:
Carver Mead (now Gordon and Betty Moore Professor of Engineering and Applied Science at Caltech) states that Moore's Law "is really about people's belief system, it's not a law of physics, it's about human belief, and when people believe in something, they'll put energy behind it to make it come to pass." Mead offers a retrospective, yet philosophical explanation of how Moore's Law has been reinforced within the semiconductor community through "living it":
After it's [Moore's Law] happened long enough, people begin to talk about it in retrospect, and in retrospect it's really a curve that goes through some points and so it looks like a physical law and people talk about it that way. But actually if you're living it, which I am, then it doesn't feel like a physical law. It's really a thing about human activity, it's about vision, it's about what you're allowed to believe. Because people are really limited by their beliefs, they limit themselves by what they allow themselves to believe what is possible. So here's an example where Gordon [Moore], when he made this observation early on, he really gave us permission to believe that it would keep going. And so some of us went off and did some calculations about it and said, 'Yes, it can keep going'. And that then gave other people permission to believe it could keep going. And [after believing it] for the last two or three generations, 'maybe I can believe it for a couple more, even though I can't see how to get there'. . . The wonderful thing about [Moore's Law] is that it is not a static law, it forces everyone to live in a dynamic, evolving world.
So the actual pace of Moore's Law is about expectations, human behavior, and, not least, economics, but has relatively little to do with the cutting edge of technology or with technological limits. Moore's Law as encapsulated by The Economist is about the scale necessary to stay alive in the semiconductor manufacturing business. To bring this back to biological technologies, what does Moore's Law teach us about playing with DNA and proteins? Peeling back the veneer of technological determinism enables us (forces us?) to examine how we got where we are today.
A Few Meandering Thoughts About Biology
Intel makes chips because customers buy chips. According to The Economist, a new chip fab now costs north of $6 billion. Similarly, companies make stuff out of, and using, biology because people buy that stuff. But nothing in biology, and certainly not a manufacturing plant, costs $6 billion.
Even a blockbuster drug, which could bring revenues in the range of $50-100 billion during its commercial lifetime, costs less than $1 billion to develop. Scale wins in drug manufacturing because drugs require lots of testing, and require verifiable quality control during manufacturing, which costs serious money.
Scale wins in farming because you need...a farm. Okay, that one is pretty obvious. Commodities have low margins, and unless you can hitch your wagon to "eat local" or "organic" labels, you need scale (volume) to compete and survive.
But otherwise, it isn't obvious that there are substantial barriers to participating in the bio-economy. Recalling that this is a hypothesis rather than an assertion, I'll venture back into biofuels to make more progress here.
Scale wins in the oil business because petroleum costs serious money to extract from the ground, because the costs of transporting that oil are reduced by playing a surface-to-volume game, and because thermodynamics dictates that big refineries are more efficient refineries. It's all about "steel in the ground", as the oil executives say -- and in the deserts of the Middle East, and in the Strait of Malacca, etc. But here is something interesting to ponder: oil production may have maxed out at about 90 million barrels a day (see this 2007 article in the FT, "Total chief warns on oil output"). There may be lots of oil in the ground around the world, but our ability to move it to market may be limited. Last year's report from Bio-era, "The Big Squeeze", observed that since about 2006, the petroleum market has in fact relied on biofuels to supply volumes above the ~90 million per day mark. This leads to an important consequence for distributed biofuel production that only recently penetrated my thick skull.
Below the 90 million barrel threshold, oil prices fall because supply will generally exceed demand (modulo games played by OPEC, Hugo Chavez, and speculators). In that environment, biofuels have to compete against the scale of the petroleum markets, and margins on biofuels get squeezed as the price of oil falls. However, above the 90 million per day threshold, prices start to rise rapidly (perhaps contributing to the recent spike, in addition to the actions of speculators). In that environment, biofuels are competing not with petroleum, but with other biofuels. What I mean is that large-scale biofuels operations may have an advantage when oil prices are low because large-scale producers -- particularly those making first-generation biofuels, like corn-based ethanol, that require lots of energy input -- can eke out a bit more margin through surface to volume issues and thermodynamics. But as prices rise, both the energy to make those fuels and the energy to move those fuels to market get more expensive. When the price of oil is high, smaller scale producers -- particularly those with lower capital requirements, as might come with direct production of fuels in microbes -- gain an advantage because they can be more flexible and have lower transportation costs (being closer to the consumer). In this price-volume regime, petroleum production is maxed out and small scale biofuels producers are competing against other biofuels producers since they are the only source of additional supply (for materials, as well as fuels).
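The two price regimes described above can be caricatured in code. The price function below is invented out of whole cloth; only the qualitative kink at the production ceiling matters:

```python
# Caricature of the two price regimes: a gentle price curve while
# petroleum supply can respond to demand, and a steep one above the
# ceiling, where only biofuels can add supply. All coefficients are
# invented for illustration.
CEILING = 90.0  # million barrels/day petroleum production ceiling

def oil_price(demand_mbbl_day: float) -> float:
    """Illustrative price vs. demand, continuous at the ceiling."""
    if demand_mbbl_day <= CEILING:
        return 40.0 + 0.5 * demand_mbbl_day            # supply responds
    return 85.0 + 25.0 * (demand_mbbl_day - CEILING)   # biofuels marginal

for demand in (80, 88, 90, 92, 95):
    print(f"{demand} Mbbl/day -> ${oil_price(demand):.0f}/bbl")
```

Below the kink, biofuel margins get squeezed by cheap oil; above it, the marginal barrel is a biofuel barrel, and biofuel producers compete mainly with each other.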
This is getting a bit far from Moore's Law -- the section heading does contain the phrase "meandering thoughts" -- I'll try to bring it back. Whatever the origin of the trends, biological technologies appear to be the same sort of exponential driver for the economy as are semiconductors. Chips, software, DNA sequencing and synthesis: all are infrastructure that contribute to increases in productivity and capability further along the value chain in the economy. The cost of production for chips (especially the capital required for a fab) is rising. The cost of production for biology is falling (even if that progress is uneven, as I observed in the post about Codon Devices). It is generally becoming harder to participate in the chip business, and it is generally becoming easier to participate in the biology business. Paraphrasing Carver Mead, Moore's Law became an organizing principle of an industry, and a driver of our economy, through human behavior rather than through technological predestination. Biology, too, will only become a truly powerful and influential technology through human choices to develop and deploy that technology. But access to both design tools and working systems will be much more distributed in biology than in hardware. It is another matter whether we can learn to use synthetic biological systems to improve the human condition to the extent we have through relying on Moore's Law.
Nature is carrying a short news piece by Erica Check and Heidi Ledford on the end of Codon Devices, "The Constructive Biology Company". I am briefly quoted in the discussion of what might have gone wrong. I would add here that I don't think it means much of anything for the field as a whole. It was just one company. Here is last week's initial reporting by Todd Wallack at the Boston Globe.
I've been pondering this a bit more, and the following analogy occurred to me after I was interviewed for the Nature piece. Codon, as described to me by various people directly involved, was imagined as a full-service engineering firm -- synthetic genes and genomes, design services, the elusive "bio-fab" that would enable one-stop conversion of design information into functional molecules and living systems. Essentially, it seems to me that the founders wanted to spin up an HP of biology, except that they tried to jump into the fully developed HP of 1980 or 1990 rather than the garage HP of 1939. Codon was founded with on the order of $50 million, with no actual products ready to go. HP was founded with ~$500 (albeit 1939 dollars) and immediately started selling a single product, an audio oscillator, for which there was a large and growing market. HP then grew, along with its customers, organically over decades. Moreover, the company was started within the context of an already large market for electronics.
The synthetic biology market -- the ecology of companies that produce and consume products and services related to building genes and genomes -- still isn't very big. A very generous estimate would put that market at $100 million. This means the revenues for any given firm are (very optimistically) probably no more than a few tens of millions. (The market around "old style" recombinant DNA is, of course, orders of magnitude larger.) Labor, rather than reagents and materials, is still likely to be the biggest cost for most companies in the field. And even when they do produce an organism, or a genetic circuit, with value, companies are likely to try to capture all the value of the learning that went into the design and engineering process.
This leads to an important question that I am not sure is asked often enough by those who hope to make a living off of emerging biological technologies: Where is the value? Is it in the design (bits), or in the objects (atoms)? The answer is a bit complicated.
Given that the maximum possible profit margin on synthetic genes is falling exponentially, it would seem that finding value in those particular atoms is going to get harder and harder. DNA is cheap, and getting cheaper; the design of genetic circuits (resulting in bits) definitely costs more (in labor, etc.) than obtaining the physical sequence by FedEx. That is the market that Codon leapt into. If all of the value is in the design process, and in the learning associated with producing a new design, not many companies are going to outsource that value creation to a contractor. If Codon had a particular design expertise, they could have made a go with that as a business model, as do electronics firms that have niche businesses in power electronics or ASICs. There are certainly very large firms that design, but do not build, electronics (the new AMD, for example), but they didn't get that way overnight. They have emerged after a very long (and brutal) process of competition that has resulted in the separation of design and manufacturing. Intel is the only integrated firm left standing, in part because they set their sights on maintaining scale from day one (see the recent Economist article on the semiconductor industry for a nice summary of where the market is, and where it may be headed).
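To make "falling exponentially" concrete, here is a sketch of the margin squeeze on gene synthesis. The starting price, decline rate, and cost floor are all invented; what the sketch shows is how quickly an exponential price decline overtakes any fixed internal cost:

```python
# Sketch: margin available to a gene-synthesis contractor when the
# market price per base pair falls exponentially. All parameters
# are invented for illustration.
PRICE0 = 1.00        # $/bp market price at year 0 (invented)
DECLINE = 0.5        # price halves each year (invented)
COST_FLOOR = 0.05    # $/bp internal cost a producer might reach (invented)

for year in range(6):
    price = PRICE0 * DECLINE ** year
    margin = price - COST_FLOOR
    status = "profitable" if margin > 0 else "underwater"
    print(f"year {year}: ${price:.3f}/bp, margin ${margin:+.3f}, {status}")
```

With these numbers the business goes underwater in year five; a producer whose own costs don't fall at least as fast as the market price has a strictly limited lifetime.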
In another area of synthetic biology, I can testify with an uncomfortably high degree of expertise that costs in the market for proteins (a very different beast than DNA) are much higher for atoms than for bits. It is relatively easy for me to design (update: perhaps better phraseology would be "specify the sequence of") a new protein for Biodesic and have Blue Heron synthesize the corresponding gene. It is rather less easy for me to get the actual protein made at scale by a third party (and it would be even harder to do it myself). Whereas gene synthesis appears to be a commodity business, contract protein manufacturing is definitely not. Expression and purification require knowledge (art). Even if a company has loads of expertise in protein expression, in my experience they will only offer an estimate of the likelihood of success for any given job. And even if they can make a particular protein, without a fairly large investment of time and money they may not be able to make very much of the protein or ship it at a sufficiently high purity. Unlike silicon processing and chip manufacturing, it isn't clear that anyone can (yet) be a generalist in protein expression. Once you get a protein manufacturing process sorted out, the costs quickly fall and the margins are excellent. Until then: ouch.
So, for DNA bits are expensive and atoms are cheap. For proteins, bits are cheap and atoms are initially very expensive. Who knows how much of this was clear to the founders of Codon several years ago; I have only been able to articulate these ideas myself relatively recently. It is still very early in the development of synthetic biology as a market, and as a sector of the economy.
Bloggingheads.tv has just posted the video of my conversation with Carl Zimmer, "Biology as Technology".
We covered quite a lot of ground. Check it out and drop a comment or a note if you have a question.
(Update: see "Revisiting Mood Hacking with Scents", 3 December 2009.)
We are all familiar with the aromas used by stores in the hopes of motivating consumer frenzy. Walk into some establishments and you may feel as if you have been smacked with a fragrant bunch of flowers. Or possibly a fragrant leather shoe. Maybe this actually encourages people to spend money. It usually just makes me sneeze.
But what if the general strategy of behavior modification via perfumes of one kind or another really does work? At the 2008 World Economic Forum in Davos, there was an explicit attempt to influence discussions through the use of custom scents designed for the occasion.
Here is a short excerpt from "Davos Aromas Deodorize Subprime Stench, Charm Dimon, Kissinger", by A. Craig Copetas (Bloomberg News):
"I know a lot of people think this is foolish," says Toshiko Mori, chairwoman of Harvard University's architecture department and one of the WEF delegates who initiated the perfume project. "But the global economy is in dire straits and we must improve the quality of human spirits. Perfuming is a powerful tool in a much broader discourse. The fragrances will help us reach economic and political solutions at Davos."
Here is CNN's take: "Smelly Davos unveils new world odor." Ha.
The reader might imagine a room full of national security professionals debating the merits and ethics of this "technology". We see two camps emerge. The first group is shocked -- shocked! -- that biochemical warfare is being brought indoors to induce in captains of industry and policy makers a mood of compromise. The second group notes that all it took to hack the mood of Boris Yeltsin was an open bottle of vodka. The latter strategy has, of course, been used for millennia.
Hacking the mood of an entire room full of people at once is an interesting twist, though. What happens when someone modifies airborne rhinoviruses to express neuroactive peptides? (See my post on iGEM 2008: "Surprise -- the Future is Here Already".) Science fiction gave us the answer long ago. Isaac Asimov had his characters wearing anti-viral filters in their nostrils even in his early stories. Seems like filters with sufficiently small pores might make it hard to breathe. And what happens if you sneeze? "Ouch!" or "Ewww", I imagine.
Anyway, how would we even know that mood hacking was occurring? Aside from simply noting changes in behavior, or getting, um, wind of the threat via human intelligence, we would have to measure any chemical or biological weapon directly. But before pulling out the Tricorder and identifying a threat, we would first have to be constantly monitoring our environment in order to get a baseline of environmental signals. So, we have already struck out. No such monitoring is really happening. We are just cherry picking a few things that are easy to see. Oh, and still no Tricorder.
If the mood altering mechanism was delivered via a virus, we would have to not just monitor the number of viruses of any given species in the air, but also be sequencing all of them, all the time. Again, we are striking out.
I have a hard time imagining that viral mood hacking threats are going to show up any time soon, but then we have no means of knowing either way. Perhaps such things are already about. How can you be sure you aren't part of "The Giving Plague"?
(Update: McKinsey seems to have pulled the whole issue from the web, which is too bad because there was a lot of good stuff in it. The text of my contribution can be found below.)
I have a short essay in a special edition of the McKinsey Quarterly, What Matters. My piece is waaaay back at the end of the printed volume, and all the preceding articles are well worth a look. Other essayists include Steven Chu, Hal Varian, Nicholas Stern, Kim Stanley Robinson, Yochai Benkler, Vinod Khosla, Arianna Huffington, Joseph Nye, and many more. Good company.
Here is the essay: "The New Biofactories" (PDF), Robert Carlson, What Matters, McKinsey & Company, 2009.
Finally, the book is done. Aside from reviewing the proofs in a couple of months, and writing an afterword, it is at last out of my hands.
The title, finally, will be "Biology is Technology: The Promise, Peril, and Business of Engineering Life". It will be in the Fall 2009 Catalog from Harvard University Press, with atoms showing up at approximately New Years. I'll get around to updating the web site text eventually.
My brain is presently mush. I haven't blogged in so long I'd forgotten the user name and password for my account. I have a couple of posts in mind that I hope to get up over the weekend.
Otherwise, I can't wait to get back to actually doing science. What a concept.
First: sleep. No -- second: sleep. First: beer.