Category Archives: science

More on cocktail thermodynamics

Debby found this post from Dave Arnold that resembles what I discussed in the last post. Shorter version: I’m probably wrong about the ice being close enough to thermal equilibrium for government work, but the explanation of what is going on isn’t quite the way I recall Weitz explaining it. And my old Intro Bio prof is still wrong.

Fact 1: Ice at 0°C can chill an alcoholic drink well below 0°C. This fact is counter-intuitive to many, but is an irrefutable consequence of the laws of thermodynamics.

Arnold does an experiment that is essentially what we saw on Friday night, using vodka instead of tequila. After pre-incubating ice in water, the water is drained off and vodka is added. He gets  slightly diluted vodka at -4.5 °C. He also did a measurement of how fast ice reaches thermal equilibrium:

Fact 2: Bar ice is almost always at 0°C unless it comes straight from the freezer. People have a hard time accepting this fact. As a test, I froze a large ice cube with a super-thin hypodermic thermocouple probe in the center.  I put that ice cube, along with some run-of-the-mill ice cubes for insulation, into a blast freezer for 4 hours until everything was at -20 C.  I then put the entire batch into a plastic container and waited.  In under 20 minutes, the large ice cube was within 0.5 degrees of zero.

In the comments to an earlier post, a reader wonders if he is measuring the core of the ice or surface water that develops around the temperature probe. Although the temperature might not be quite all the way up to zero for that reason, I suspect that the known conductive properties of ice mean that it’s closer to 0 than -20.

So why does this work and what’s wrong with my earlier analysis? Because I idiotically glossed over an important part of the Clausius version of the Second Law:

Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time

The other change that is connected therewith to the passage of heat is the breaking of bonds in the ice and the conversion of ice to water. Duh! As the vodka/tequila goes from 20 to 0, the heat is passing from the warmer liquid to the cooler solid ice, melting the latter, which absorbs 333.55 J/g (80 cal/g). Melting a gram of ice can chill 4 ml of water from room temp to 0.
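The arithmetic behind that last sentence is worth a quick sanity check (my own sketch; the 4.18 J/g·K heat capacity of water is my assumption, not from the post):

```python
# How much room-temperature water can 1 g of melting ice chill to 0 °C?
latent_heat = 333.55   # J per gram of ice melted (enthalpy of fusion of water)
c_p_water = 4.18       # J per gram per kelvin (assumed heat capacity of water)
delta_T = 20.0         # cooling from ~20 °C room temp down to 0 °C

grams_chilled = latent_heat / (c_p_water * delta_T)
print(round(grams_chilled, 1))  # ≈ 4.0 g, i.e. about 4 ml of water
```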

But when the vodka/tequila is below 0 but above the depressed freezing point, melting reactions don’t stop. There are still a fraction of the total intermolecular collisions between the liquid and the ice that have sufficient energy to break the water-water bonds on the surface of the ice and melt off some of it (Not being an ideal gas, this won’t be a Maxwell-Boltzmann distribution, but there will be a mix of energies). This cools the liquid phase, just as it does when the collision occurs above 0°C.

Whether you are cooling water or liquor, the final temperature is still set by the freezing point, which is where the probability of a water molecule joining or leaving the crystal lattice is equal. So I think I’m still right that the freezing point depression is important. The ethanol isn’t doing anything to the heat exchange per se. It’s allowing us to see cooling below 0 when we measure the temperature of the liquid phase.

Arnold argues that the chilling is due to a combination of the heat that goes into melting and the entropic gain from diluting the water released by that melting into the liquid phase. In another comment he writes

Freezing point depression isn’t enough to explain why the drink gets colder than zero as you shake it. For instance, many oils have a very low freezing point but if you put an ice cube in them they will only go down to 0 degrees because there is no mixing.

I’m not crazy about this explanation, as it seems to me that it approaches Gibbs’ paradox territory with respect to the water released from the ice being diluted into the bulk vodka/tequila. Here, I think he’s seeing an effect of surface water around the ice cube limiting the temperature change. The observation about the melting points of oils does suggest a possible way to measure the internal temperature of ice cubes if one could suspend oil droplets in clear ice.

On the physical chemistry of ice cream and margaritas

When I was an undergrad at Stanford, senior biology majors were recruited to be TAs in the freshman biology class. I signed on for this, and my first teaching experience at the undergraduate level involved attending lectures and leading discussion sections. In one lecture, the prof talked to the class about why salt is added to the ice in an old fashioned hand-cranked ice cream machine. He said, correctly, that the salt allowed the brine surrounding the freezing canister to get colder than 0°C, but that the mechanism was via the enthalpy of solution of the salt dissolving in the water. I told my recitation section that the prof was wrong: the brine gets colder than 0°C due to freezing point depression, helped along by the ice starting at well below 0°C.

My questioning of his authority got back to the prof, who decided to add some material to a subsequent lecture to correct what he believed was his smart-aleck TA leading some of his freshmen astray. He pointed out that the enthalpy of solution for NaCl is +3.3 kJ/mol (or ~0.8 kcal/mol; we used kcal back then). I forget how much NaCl he thought was reasonable, but he came up with a back of the envelope calculation that disagreed with this:

For each 58.44 grams (2.06 ounces) of salt that dissolves, 0.717 kilocalories (3 kilojoules) of heat is absorbed, meaning that dissolving salt causes the solution to become colder. The change is so slight you are unlikely to notice it in everyday life.

Fortunately for my prof, this was long before smart-ass students could use Google on their phones to find links to contradictory sources. And Wikipedia was far in the future. Saturated NaCl at 0°C is a 26% solution, which is ~4.45 M. So, starting with ice cold water and no ice, you could drop the temperature by something on the order of 3 degrees. That would eventually freeze the ice cream only if you had a massively excessive volume of brine relative to ice cream, because freezing the cream requires a liquid to solid phase transition with an enthalpy of fusion on the order of 200 J/g (less than water’s, but still a lot relative to the heat of solution, according to this (pdf)).
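Here’s a rough reconstruction of that back-of-the-envelope calculation (my own sketch using the numbers above, crudely approximating the brine’s heat capacity as that of pure water):

```python
# Temperature drop from saturating one liter of ice-cold water with NaCl.
dH_solution = 3.3e3   # J/mol absorbed on dissolving NaCl (the post's figure)
molarity = 4.45       # mol/L, saturated NaCl at 0 °C
c_p_water = 4.18      # J/g·K, heat capacity of water (assumed for the brine)
mass_water = 1000.0   # g, one liter of water

heat_absorbed = dH_solution * molarity             # J absorbed per liter
delta_T = heat_absorbed / (c_p_water * mass_water)
print(round(delta_T, 1))  # ≈ 3.5 K — "on the order of 3 degrees"
```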

So, in hindsight, I remain unconvinced by my old prof. I’ve often thought that one problem with intro bio textbooks is that they start with material that is taught more rigorously in intro chem, by faculty who know the material better. I might have only been a freshman, but the elapsed time since I had taken chemistry was decades shorter than it was for the ecologist teaching intro bio.

I was reminded of this experience earlier on Friday night, when I attended a very entertaining public lecture in the Physics Department, only peeking at the streaming video for the Stanford-S. Carolina Women’s basketball Final Four game on my phone a couple of times (Stanford lost, unfortunately… not enough enthalpy of shooting).

This was the event:

Now Harvard’s David Weitz is very different in background from a Stanford Ecologist/Intro Bio prof, and the elapsed time since my last physics/p. chem course is orders of magnitude longer than the time since he last taught physics. Nevertheless, I think he made the same kind of error as my old Intro Bio prof, and in fact I think his is worse in terms of the thermodynamics.

Toward the end of the lecture, he was using Peter Madden’s margarita preparation to illustrate temperature and phase transitions. In a shaker with ice and water, he asked the packed audience what they thought the temperatures were for the liquid and solid phases, i.e. the water and the ice. A young boy in the audience guessed that because they were in different phases, the ice was colder. He said something like: the ice is 31.999999 °F and the water is 32.000001 °F. Weitz said, no, they are both at 32 °F (or 0°C; there was a lot of shifting between C and F and I forget which). At that point, I leaned over to Debby and muttered that he was assuming that the system had reached thermal equilibrium, which was not knowable from the information provided.

OK, whatever… but then he had Madden pour out some tequila, which they measured as being at room temperature. They drained the water from the shaker, added the room temperature tequila, and shook it to mix. He then asked what people thought the temperature of the tequila would be. People guessed, they did the measurement, and lo and behold, it was significantly below 32°F/0°C.

What gave me a deja vu experience was his explanation of why the liquid phase was below the freezing point of pure water. We agree that the ethanol in the tequila is key. But unless I really misunderstood him both in real time and when I asked him about it afterward, he was arguing that the ethanol somehow allowed the liquid phase to lower the temperature of the solid phase! That would require heat flow from the colder ice to the warmer tequila, which looks to me like a flagrant violation of the Second Law of Thermodynamics.

This violation is only a problem if you think, as he insisted afterward, that the ice in the ice-water mix had reached thermal equilibrium and the solid phase starts uniformly at 0°C when the 20°C tequila is added. By contrast, if you agree with the kid in the audience that at least part of the ice was colder than the final temperature of the liquid phase, there is no problem. The final temperature is just set by the freezing point depression from the ethanol and other solutes in the tequila.

The alternative hypothesis is that I’m misunderstanding what he said or missing something. This is plausible, because although it’s an appeal-to-authority argument, I think it’s reasonable to think that a Harvard Physics Prof who specializes in phase transitions has a higher probability of being right about this than me, a molecular biologist/annotation maven. Although I would estimate that the probability of me being right is still higher than the probability of getting a reservation at El Bulli before it closed.

But see the next post for an update!

Equation of the post:

ΔTF = KF · b · i,

  • ΔTF, the freezing-point depression, is TF (pure solvent) − TF (solution).
  • KF is the cryoscopic constant, which depends on the properties of the solvent.
  • b is the molal concentration of the solute.
  • i is the van ’t Hoff factor of the solute: the number of particles per dissolved formula unit. For ethanol this would be 1; for NaCl, it would be 2.

From Wikipedia.
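As a rough application of the equation above, one can invert it to ask what ethanol concentration is implied by Arnold’s measured −4.5 °C (this uses the dilute-solution approximation, so treat the answer as a ballpark, not a real phase-diagram calculation):

```python
# What ethanol molality would put the freezing point of the diluted
# vodka at the measured -4.5 °C? (Dilute-solution approximation only.)
K_f = 1.86        # K·kg/mol, cryoscopic constant of water
i = 1             # ethanol does not dissociate in solution
delta_T_f = 4.5   # K of freezing-point depression observed

b = delta_T_f / (K_f * i)     # molality of ethanol, mol per kg of water
grams_per_kg = b * 46.07      # ethanol molar mass ~46.07 g/mol
print(round(b, 2), round(grams_per_kg))  # ≈ 2.42 mol/kg, ≈ 111 g ethanol per kg water
```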

GMO tech makes the Impossible Burger possible

… or at least economically practical.

Earlier this week I noticed a retweet of this event in my twitter feed:

Coincidentally, someone else posted this video about the Impossible Burger (which I hadn’t heard of before this week)

I always find stories about science and food to be interesting, and this even had a connection to my alma mater: the founder of Impossible Foods is Pat Brown of Stanford’s Biochemistry Department.

Via Google, I found some interesting things about the Impossible Burger. The video talks about their general approach of using analytical methods to figure out what constitutes the constellation of perceptions that we get when eating a particular food. But what this post is about is the secret ingredient: heme. When we talk about red meat, a lot of what makes it red is the iron in heme. I first learned about heme in any real detail at Stanford when I took intro biochemistry as an undergrad (back then the undergrads could take the same course as first year med students). Heme is found in myoglobin and hemoglobin, the major oxygen carrying proteins in muscle and blood. Heme is responsible for the “smoke ring” in BBQ. Heme is also found in other proteins, and based on this story, it appears that Impossible Foods first tried to get heme from spinach chloroplasts. But I’m guessing that the yield was too low to scale up production, so they looked at another source.

Leghemoglobin is a heme protein made in the root nodules that support nitrogen fixation. The Impossible Burger contains soy leghemoglobin, but it’s not actually made from soybeans, because the leghemoglobin is found in the root nodules, which are not normally harvested. Digging up the roots to get the leghemoglobin would negate some of the environmental benefits claimed by Impossible Foods, and it is also probably as economically unviable as getting heme from spinach leaves, if not worse. So to get the leghemoglobin, they cloned the soy protein into Pichia pastoris, a yeast used in biotech for protein overexpression. Here’s how Impossible Foods describes their ingredients:

The Impossible Burger is made from simple ingredients found in nature, including wheat, coconut oil and potatoes. We add one more special ingredient, called “heme.” Heme contributes to the characteristic color and taste of meat, and it catalyzes all the other flavors when meat is cooked. Heme is exceptionally abundant in animal muscle — and it’s a basic building block of life in all organisms, including plants. We discovered how to take heme from plants and produce it using fermentation — similar to the method that’s been used to make Belgian beer for nearly a thousand years. Adding heme to the Impossible Burger makes it a carnivore’s delight.

This struck me as kind of odd. Is there something special about Belgian beer fermentation that makes it more similar to Pichia protein production than normal beer fermentation? Belgian beer fermentation historically uses more wild yeast than others, but as far as I can tell from my reading, Pichia is not a desirable species in any beer fermentation, and the inoculum is going to be a pure culture, not the stuff falling off the cobwebs from a Trappist monastery.

The news coverage of the Impossible Burger has been pretty clear about the source of the heme. For example:

  • NPR:

    By taking the soybean gene that encodes the heme protein and transferring it to yeast, the company has been able to produce vast quantities of the bloodlike compound. Each vat of frothy red liquid in the lab holds enough heme to make about 20,000 quarter-pound Impossible Burgers. “We have to be able to produce this on a gigantic scale,” says Brown.

  • NYT

    Thanks to the addition of heme, an iron-rich molecule contained in blood (which the company produces in bulk using fermented yeast), it is designed to look, smell, sizzle and taste like a beef burger.

But what I don’t see in either article is the three letter acronym with a lot of baggage: GMO. It’s understandable, but kind of a shame, IMO. Impossible Foods applied to the FDA for their GMO-based heme to be Generally Recognized as Safe. Most scientists I know would agree with that for most, if not all, extant GMO foods. But if golden rice and virus-resistant plants for poor farmers aren’t enough to sway GMO fearmongers, vegan burgers for first-world foodies are unlikely to do much.

Speaking of GM burgers, it’s been 10 years since Nature Biotechnology published a report of GM cattle where the PRNP gene was knocked out. Will we ever see CJD-free meat in the butcher’s section? That one really is a previously impossible product made possible by GM technology.

Analogy-creep in hyping science

Via Instapundit by way of Popular Mechanics, I just saw this press release from UW-Madison hyping a new paper studying the host-virus interactome between humans and influenza.

In a comprehensive new study published today in the journal Cell Host and Microbe, the University of Wisconsin-Madison’s Yoshihiro Kawaoka and a team of researchers have set the stage for an entirely different approach. They have revealed methods for thwarting the hijackers by shutting down the cellular machinery they need, like cutting the fuel line on a bank robber’s getaway car.

When this got translated by Popular Mechanics we get the headline

Potential New Flu Treatment Would Starve the Virus, Limiting Resistance

which is the text blogged by Instapundit. This caught my eye because “starve the virus” is an odd claim, since a virus is only metabolically active in the host, and the metabolites it uses are generally things the host needs too. From what I can tell from the abstract (apparently we don’t get Cell Host and Microbe on campus here), the study is a large-scale interactome screen to identify host proteins that coimmunoprecipitate with influenza proteins. Some of these were validated as affecting virus growth in culture by doing siRNA knockdowns. I’m not sure whether they then showed that known drug inhibitors also affected virus growth.

Here’s my guess about what happened:

  • The researcher told a UW PR person that the study catalogs host proteins that might be needed by influenza to propagate itself, and points out that resistance to drugs that target the host can’t easily arise in the virus.
  • The UW PR person tries to come up with something that is not part of a bank robber and comes up with a getaway car.
  • Continuing the analogy, the UW writer picks an essential part in the getaway car: the fuel line.
  • The Popular Mechanics headline writer saw “fuel” and thought the study was about reducing fuel for the virus.
  • We get the headline suggesting that the study is about starving viruses.

Of course, if the virus is a bank robber, the host cell is not the getaway car; it’s the bank. Inhibiting virus infection with drugs that target host proteins is not like cutting the fuel line in the getaway car; it’s more like preventing bank robberies by killing bank tellers.  And it’s not just killing the tellers in the bank that’s being robbed, it’s killing all the tellers in all the banks in the community, whether they are being robbed or not.  Maybe that’s a reasonable strategy if the tellers are really nonessential in an age of ATMs. But that analogy is a lot less attractive.

The abstract mentions two potential “targets”, GBF1 and JAK1. I’m not sure how promising those are in terms of being therapeutic targets, based on the phenotypes of mouse knockouts.

Learning Artemis

For editing genome annotations, many of my colleagues use Artemis while others use Apollo. For my own use, I’ve usually just made scripts that generate GFF and visualized that in Gbrowse, Jbrowse, or IGV. For the genomics class I co-teach, we’ve had students edit GFF in a text editor (emacs!) and display it in IGV. But this year we shifted to doing more stuff that we used to do on the command line to our local teaching Galaxy, so after many years of avoiding them, I need to quickly get up to speed with Artemis and/or Apollo (in the long run, we’re going to use WebApollo, but that isn’t happening before the next homework assignment). Desktop Apollo is no longer being developed, and it’s not clear what the status of Artemis is, so this learning exercise may not be that useful.

To teach the kinds of things that MAKER does as a complete workflow, we are showing students how to take pieces of ab initio and data-driven evidence and assemble by hand the kind of evidence stack that MAKER automates. This means that we want to start with an undecorated fasta file of our artificial genome and load a bunch of gff, gtf, and bam files.

Everything below was done on a MacBook Air running OSX 10.9 (Mavericks).

Loading a fasta file

It seems like there are a couple of ways to do this. I was able to load my fasta file using either File > Read an entry or by invoking a project manager (which only seems to be available from the File menu if nothing else is opened). I initially opened a copy of my fasta file from a directory I had used with IGV, but found that this caused saves to fail because there was also a fasta index file present. Copying the file into my artemis working directory, I was able to open and save. This is what the viewer looks like.



The top line of the viewer shows a selector for feature sets, aka “Entries” in Artemis’ jargon. Below the entry bar (which can be hidden), the viewer shows an overview and a detailed view. Scroll bars on the right allow you to adjust the zoom of each; you can make the lower panel more of an overview than the top if you want. Double clicking on either panel jumps the other to the area you are viewing. A variety of graph options for things like GC content are available and open as additional panels. As you zoom out, Artemis shows stop codons in all 6 reading frames. As you zoom in, you get amino acid and DNA sequences.

Layers of annotations are “Entries”, so I can load additional files in different formats or create them using Artemis’ built-in tools. For example, Create > Mark Open Reading Frames gives this:

Several things have changed.

  • We have a new entry “ORFS_100+” (I used the default lower limit of 100 aa for ORF calling) on the entries bar.
  • The panels are now decorated with aqua blocks showing CDS features
  • The bottom panel shows a textual list of CDS features

More tracks/entries

I loaded a couple more entries as gff files:

  • Augustus gene prediction
  • Blastx parsed with a bioperl script I wrote


To get this view I tried some additional options from the Display menu: Display > Show One Line Per Entry View, and then Display > Feature Stack View, which is what’s shown here. These two options create another panel above the overview genome panel.



There are some nice things about the display, but other parts are kind of a mess:

  • I like how the coding exons are linked across different reading frames
  • The parent-child feature relationships seem to be incomplete. CDS features are linked within a transcript, but parts of the same gene feature are displayed separately, and are stacked onto each other in a way that is hard to see.
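For reference, the parent-child relationships Artemis is trying to display are what GFF3 encodes with ID/Parent attributes. Here’s a minimal hand-made gene model (coordinates and IDs are hypothetical, just for illustration):

```python
# A minimal, hypothetical GFF3 gene model: gene -> mRNA -> exon/CDS,
# with parent-child links expressed via ID/Parent attributes.
features = [
    # (seqid, source, type, start, end, score, strand, phase, attributes)
    ("chr1", "manual", "gene", 1000, 3000, ".", "+", ".", "ID=gene1"),
    ("chr1", "manual", "mRNA", 1000, 3000, ".", "+", ".", "ID=mRNA1;Parent=gene1"),
    ("chr1", "manual", "exon", 1000, 1500, ".", "+", ".", "Parent=mRNA1"),
    ("chr1", "manual", "exon", 2000, 3000, ".", "+", ".", "Parent=mRNA1"),
    ("chr1", "manual", "CDS",  1200, 1500, ".", "+", "0", "ID=cds1;Parent=mRNA1"),
    ("chr1", "manual", "CDS",  2000, 2800, ".", "+", "2", "ID=cds1;Parent=mRNA1"),
]

gff3 = "##gff-version 3\n" + "\n".join(
    "\t".join(str(col) for col in feature) for feature in features
)
print(gff3)
```

A browser that fully honors these attributes should group all six rows into one gene model rather than stacking the gene parts separately.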

Create a new set of annotations

Create > New Entry adds an entry to the entry bar called “no_name”. Yes, really. There’s no field to name the entry when you create it. You have to use Entries > Set Name of Entry and pick the no_name entry before you can rename it.

Features can be copied from the evidence entry sets to your custom entry and then edited. But I don’t think I’ve found the right way to copy a whole feature set (e.g. gene, transcript, introns, CDS, etc.) together.

That’s where I am so far… more later, perhaps.

More info

Artemis manual (ftp/pdf)

Artemis tutorials:



Ebola transmission

I did some reading on this topic a week ago, and this has been sitting in my drafts for about a week.

In the last post I noted that NEJM recently stated

Health care professionals treating patients with this illness have learned that transmission arises from contact with bodily fluids of a person who is symptomatic — that is, has a fever, vomiting, diarrhea, and malaise. We have very strong reason to believe that transmission occurs when the viral load in bodily fluids is high, on the order of millions of virions per microliter.

The question of whether patients are contagious before they become symptomatic has come up in debates about whether quarantine is appropriate caution or hysteria. The judge’s decision in the case of Mayhew v. Hickox, where a returning MSF nurse contested Maine’s State Dept of Health and Human Services quarantine request, repeats this. Citing an expert from the state’s equivalent of the CDC, the judge wrote:

Individuals infected with Ebola Virus Disease who are not showing symptoms are not yet infectious.

But others are not as sure:

Moreover, said some public health specialists, there is no proof that a person infected — but who lacks symptoms — could not spread the virus to others.

“It’s really unclear,” said Michael Osterholm, a public health scientist at the University of Minnesota who recently served on the U.S. government’s National Science Advisory Board for Biosecurity. “None of us know.”

[Dr. Philip K] Russell, who oversaw the Army’s research on Ebola, said he found the epidemiological data unconvincing

What is the actual data? Not being an epidemiologist or a virologist, I’m not already familiar with the literature, and I am likely to miss things and not fully understand the field-specific issues and language. But I think I can at least get a superficial sense of what’s out there, and what questions I would want to ask a real expert. Bottom line: the expert opinion that only the symptomatic are significantly contagious looks pretty good to me.

The first thing I noticed was that the literature on transmission of Ebola includes lots of computer modeling and that, like most other fields, the citations for facts that are regarded as well established are often to reviews that cite other reviews. In some cases papers cite things like the CDC website, where the information lacks references. But this 1999 review seemed like a pretty good introduction and starting point. Authors CJ Peters and JW Peters from the CDC summarize the history of Ebola outbreaks, and point out the difficulty of reconstructing what happened in many of the early cases. Baron, McCormick and Zubeir looked at the spread of Ebola in a 1979 outbreak in the southern Sudan.

Every case, except that of the index patient, could be traced to a human source of infection…
Details of exposure to infection were not available for 2 secondary cases; the other 27 were associated with physical contact. Of these, 24 had provided nursing care to other patients in the family; for the remaining 3 patients (including the 2 children) the history indicated that the physical contact had been less intimate.

More importantly, the large numbers of family members who did not get Ebola suggested that the virus is not easily transmitted without direct contact with bodily fluids. Antibodies in asymptomatic family members (who had contact) suggested infections that never turned into symptomatic cases. There were no cases where these were the source of another infection. But the numbers were relatively small.

In January of 1995, a charcoal worker who probably got Ebola from a natural reservoir was admitted to the Kikwit General Hospital. Retrospective analysis showed that he infected his family in the area of Kikwit, and some of the secondary and tertiary patients went to the Kikwit II Maternity Hospital over the following months. The official index patient of the outbreak was a 36 year old male who worked in the Kikwit II Maternity Hospital as a lab tech. The lab tech presented with fever and intestinal symptoms that led to surgery for a suspected perforated bowel.

He underwent laparotomy at Kikwit General Hospital for a suspected perforated bowel after protracted fever. Postoperative abdominal distention increased, and an abdominal puncture revealed bloody peritoneal fluid. The patient underwent a new laparotomy, which showed massive intraabdominal hemorrhage. Three days later, on 14 April 1995, the patient died.

By that time, medical personnel who had cared for the index patient were getting sick. Only then was a viral hemorrhagic fever suspected. CDC confirmed that it was Ebola on May 10 after getting samples from Zaire the day before. Even before the confirmation, the government had declared an epidemic. By the end of the Kikwit outbreak, 316 people were known to have gotten Ebola, and 285 deaths were attributed to Ebola. This provided another opportunity to look at who gets Ebola and who doesn’t during an outbreak.

Dowell et al looked at risk factors for transmission of Ebola within families in the Kikwit outbreak. The results are overall in agreement with what was seen in Sudan.

The exposure that was most strongly predictive of risk for secondary transmission was direct physical contact with an ill family member, either at home in the early phase of illness or during the hospitalization. Of 95 family members who had such contact, 28 became infected, whereas none of 78 family members who did not touch an infected person during the period of clinical illness were infected (RR, undefined; P < .001). Nevertheless, the 78 family members who did not report direct physical contact with an ill person during the clinical phase of illness participated in a variety of activities that would have exposed them to fomite or airborne routes of spread. During the incubation period, all 78 shared meals with their ill family member, 26 reported direct physical contact, 15 shared their bedroom, and 6 shared their bed. In the early phase of illness, 62 slept in the same house and 42 shared meals. During the late phase of illness, 24 visited the hospital and 18 spoke with their ill family member.
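The attack rates in that passage are easy to verify (figures taken directly from the quote; nothing else assumed):

```python
# Attack rates among family members in the Kikwit household study.
infected_contact, total_contact = 28, 95        # had direct physical contact
infected_no_contact, total_no_contact = 0, 78   # no direct physical contact

rate_contact = infected_contact / total_contact
rate_no_contact = infected_no_contact / total_no_contact

print(round(rate_contact, 3))  # 0.295, i.e. ~30% attack rate with contact
# The relative risk would be rate_contact / rate_no_contact -- a division
# by zero, which is why the paper reports "RR, undefined".
```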

Of the 316 patients, the majority had a known source of exposure, but 55 were initially unexplained. Roels et al went back and reexamined the available epidemiological information for 44 of these 55 (8 couldn’t be found and 3 refused to participate).

The probable source of exposure was identified for 32 (73%) of the 44 patients. Seventeen had visited an ill friend or relative with symptoms suggestive of EHF, 9 had been admitted to a health center in the 3 weeks preceding onset of EHF symptoms, and 6 had both risk factors. Of the 23 who had visited an ill friend or relative with symptoms suggestive of EHF, 4 (17%) resided in the same household as the ill patient and were their caregivers, 14 reported touching the ill patient, and 5 visited without touching the patient.

This leaves 12 people unaccounted for, and these 12 are sometimes cited as a problem for the conventional wisdom.

we identified an exposure source for 32 of 44 patients for whom no source was originally reported. Of the 12 patients who did not have an identified exposure source, no sociologic, occupational, or dietary risk factors for illness were found. Direct person-to-person contact was the likely mode of transmission for most EHF cases during this outbreak. However, our findings suggest that other EHF transmission modes cannot be excluded and may account for infection in those individuals for whom no previously recognized mode of transmission could be documented.

Although alternative transmission routes cannot be formally eliminated, it is important to note that the 12 should also not be taken as proof of alternative transmission. In fact, none of them were actually confirmed as even having Ebola based on culturing the virus (See Table 3). There are also questions of the ability of the researchers to really reconstruct the contacts for each of these 12 people.

In the recent outbreak there are cases of health care workers who have contracted Ebola despite precautions. This could mean that there is a route of transmission that bypasses the protective protocols… or the simpler explanation is that errors in following the protocols led to transmission by the accepted route of direct contact with virus-laden bodily fluids. The Spanish nurse who has now recovered says she doesn’t know how she got it, but earlier reports talk about contact with gloves as she removed protective gear. For at least one of the nurses from Dallas, there are reports that she had contact with Thomas Duncan in the ER without protective gear, before it was recognized that he was an Ebola patient.

NEJM on Ebola

The New England Journal of Medicine has an Editorial criticizing the quarantines in NJ and other states.

Health care professionals treating patients with this illness have learned that transmission arises from contact with bodily fluids of a person who is symptomatic — that is, has a fever, vomiting, diarrhea, and malaise. We have very strong reason to believe that transmission occurs when the viral load in bodily fluids is high, on the order of millions of virions per microliter. This recognition has led to the dictum that an asymptomatic person is not contagious; field experience in West Africa has shown that conclusion to be valid. Therefore, an asymptomatic health care worker returning from treating patients with Ebola, even if he or she were infected, would not be contagious.

In the same issue, there is an article: Clinical Illness and Outcomes in Patients with Ebola in Sierra Leone. Take a look at the supplementary material. Figure S8 shows temperature and heart rate for fatal and nonfatal cases:

Panels B, C, E and F represent cases with a normal temperature-pulse association.

All of these are infected. One of the three was a fatal case. Table S5 shows symptoms: 11/36 did not present with fever. We don’t see whether they presented other symptoms, but, from the legend:

Eight fatal subjects and one nonfatal subject showed no reported symptoms on the case notification form and were excluded from these results.


From Schieffelin et al. (2014) Clinical Illness and Outcomes in Patients with Ebola in Sierra Leone NEJM DOI: 10.1056/NEJMoa1411680 Figure S7

Perhaps people can die of Ebola without being viremic to the level needed to infect others? I also wonder if they really meant “millions per microliter” or “millions per milliliter”. The former is 10⁹/milliliter. Figure S7 shows viral loads in fatal and nonfatal patients, and it does look like Ebola in serum can reach that level, but two of the fatal cases are well below that. Other sources have claimed that the number of particles needed for an infection is on the order of 1-10. Even at 10⁶/ml, a microliter would hold 1000 particles. When people are exposed to infected bodily fluids, are the volumes involved in the picoliter range?
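The microliter/milliliter discrepancy is just a unit conversion, but it spans three orders of magnitude, so it’s worth spelling out (my own arithmetic, using only the numbers quoted above):

```python
# "Millions of virions per microliter" vs. the more plausible "per milliliter".
per_microliter_claim = 1e6                       # virions/µl as the editorial states
per_ml_equivalent = per_microliter_claim * 1000  # 1 ml = 1000 µl, so 1e9/ml

# If the intended figure was ~1e6/ml, then a single microliter still
# holds ~1000 particles -- far above an infectious dose of 1-10.
per_ul_if_per_ml = 1e6 / 1000
print(int(per_ml_equivalent), int(per_ul_if_per_ml))  # 1000000000 1000
```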

Update: the ID50 is on the order of 1-10 for animal models, but in mice the sensitivity depends on the route of introduction.

The LD50 of mouse-adapted EBO-Z virus inoculated into the peritoneal cavity was ~1 virion. Mice were resistant to large doses of the same virus inoculated subcutaneously, intradermally, or intramuscularly.

Edge wander

In class today, we talked about the first Assemblathon paper. A student asked about the term “edge wander”, which comes from a paper by Ian Holmes and Richard Durbin. Figure 6 from the paper illustrates the basic idea.

Edge wander is a problem in multiple sequence alignments, and often scientists manually adjust alignments based on some heuristics that are not entirely clear to me. At last year’s Texas Protein Folding and Function Meeting, Patsy Babbitt mentioned in passing that manually adjusting multiple sequence alignments has become impractical as the number of available sequences in conserved protein families is exploding.