Friday, October 30, 2015

The Magnetic Field of a Single Axon (Part 1)

The Magnetic Field of a Single Axon: A Comparison of Theory and Experiment (Biophysical Journal, 48:93–109, 1985).
“The Magnetic Field of a Single Axon.”
Thirty years ago, John Wikswo and I published “The Magnetic Field of a Single Axon: A Comparison of Theory and Experiment” (Biophysical Journal, Volume 48, Pages 93–109, 1985). This was my second journal article (and my first as first author). Russ Hobbie and I cite it in Chapter 8 of the 5th edition of Intermediate Physics for Medicine and Biology. I reproduce the introduction below.
An active nerve axon can be modeled with sufficient accuracy to allow a detailed calculation of the associated magnetic field. Therefore the single axon provides a simple, yet fundamentally important system from which we can test our understanding of the relation between biomagnetic and bioelectric fields. The magnetic field produced by a propagating action potential has been calculated from the transmembrane action potential using the volume conductor model (1). The purpose of this paper is to verify that calculation experimentally. To make an accurate comparison between theory and experiment, we must be careful to correct for all systematic errors present in the data.

To test the volume conductor model it is necessary to measure the transmembrane potential and magnetic field simultaneously. An experiment performed by Wikswo et al. (2) provided preliminary data from a lobster axon, however the electric and magnetic signals were recorded at different positions along the axon and no quantitative comparisons were made between theory and experiment. In the experiment reported here, these limitations were overcome and improved instrumentation was used (3–5).
As the introduction notes, the volume conductor model was described in reference (1), which is an article by Jim Woosley, Wikswo and myself (“The Magnetic Field of a Single Axon: A Volume Conductor Model,” Mathematical Biosciences, Volume 76, Pages 1–36, 1985). I have discussed the calculation of the magnetic field previously in this blog, so today I’ll restrict myself to the experiment.

I was not the first to measure the magnetic field of a single axon. Wikswo’s student, J. C. Palmer, had made preliminary measurements using a lobster axon; reference (2) is to their earlier paper. One of the first tasks Wikswo gave me as a new graduate student was to reproduce and improve Palmer’s experiment, which meant I had to learn how to dissect and isolate a nerve. Lobsters were too expensive for me to practice with so I first dissected cheaper crayfish nerves; our plan was that once I had gotten good at crayfish we would switch to the larger lobster. I eventually became skilled enough in working with the crayfish nerve, and the data we obtained was good enough, that we never bothered with the lobsters.

I had to learn several techniques before I could perform the experiment. I recorded the transmembrane potential using a glass microelectrode. The electrode is made starting from a glass tube about 1 mm in diameter. We had a commercial microelectrode puller, but it was an old design with poor control over timing, so one of my jobs was to design the timing circuitry (see here for more details). The glass would be warmed by a small wire heating element (much like the one in a toaster, but smaller), and once the glass was soft the machine would pull the two ends of the tube apart. The hot glass stretched and eventually broke, leaving two glass tubes with long, tapering tips, each with a hole about 1 micron in diameter at the narrow end. I would then backfill these tubes with 2 molar potassium citrate. The concentration was so high that when I occasionally forgot to clean up after an experiment I would come back the next day and find that the water had evaporated, leaving impressive, large crystals. The back end of the glass tube would be put into a plexiglass holder that connected the conducting fluid to a silver-chloride electrode, and then to an amplifier.

One limitation of these measurements was the capacitance between the microelectrode and the perfusing bath. Because the magnetic measurements required that the nerve be completely immersed in saline, I could not reduce the stray capacitance by lowering the height of the bath. This capacitance severely reduced the rate of rise of the action potential, and to correct for it we used “negative capacitance.” We applied a square voltage pulse to the bath, and measured the microelectrode signal. We then adjusted the frequency compensation knob on the amplifier (basically, a differentiator) until the resulting microelectrode signal was a square pulse. That was the setting we used for measuring the action potential. Whenever I changed the position of the electrode or the depth of the bath, I had to recalibrate the negative capacitance.

To record the transmembrane potential, I would poke the axon (easy to see under a dissecting microscope) with a microelectrode. Often the tip of the electrode would not enter the axon, so I would tap on the lab bench, creating a vibration just sufficient to drive the electrode through the membrane. Usually I sent the output of the microelectrode amplifier to a device that produced a current whose frequency varied with the microelectrode voltage. I’d put this current through a speaker, so I could hear when the microelectrode tip was successfully inside the axon: the DC potential would drop by about 70 mV (the axon’s resting potential), and the pitch of the speaker would suddenly drop.

Next week I will continue this story, describing how we measured the magnetic field.

The transmembrane potential, measured with a glass microelectrode from a single axon.
The measured transmembrane potential.

Friday, October 23, 2015

Clearance and Semilog Plots

I occasionally like to write a new homework problem as a gift to the readers of Intermediate Physics for Medicine and Biology. Here is the latest, for Section 2.5 about clearance.
Problem 11 ½. A patient has been taking the drug digoxin for her atrial fibrillation. Her distribution volume is V = 400 l. At time t = 0 she stops taking the drug and her doctor measures her blood digoxin concentration, C, every 24 hours.

t (hr) C (ng/ml)
0 0.85
24 0.53
48 0.33
72 0.20
96 0.125
120 0.077

Calculate the clearance, K, in ml/min.
I like this problem because it reinforces two concepts at once: 1) clearance, and 2) using semilog plots. First, let’s analyze clearance. Equation 2.21 in IPMB gives the blood concentration as a function of time

C(t) = Co exp(−(K/V)t) .

If we can measure the rate of decay of the concentration, b, where the exponential factor is written as exp(−bt), we can calculate the clearance from K = bV. So, this problem really is about estimating b from the given data.

To calculate the rate b, we should plot the data on semilog graph paper. You can download this graph paper (for free!) at http://www.printablepaper.net/category/log, http://customgraph.com, or http://www.intmath.com/downloads/graph-paper.php. The figure below shows the data points (dots).

A plot of the concentration as a function of time. In this semilog plot, the decay appears as a straight line.
The concentration as a function of time.

Now, draw a line through the dots. Sophisticated mathematical techniques could be used to fit the best line through this data, but for this homework problem I suggest merely fitting a line by eye. For data with no noise, such as used here, you should be able to calculate b to within a few percent using a ruler, pen, and some care.

Next, our goal is to determine the decay constant b from the equation C = Co exp(−bt) using the method discussed in Section 2.3. Select two points on the line. They could be any two, but I suggest two widely spaced points. I’ll use the initial data point (t = 0, C = 0.85) and then estimate the time when the line has fallen by a factor of ten (C = 0.085). The vertical dashed line in the above figure indicates this time, which I estimate to be t = 115 ± 2 hr. I include the uncertainty, which reflects my opinion that I can estimate the time when the dashed line hits the time axis to slightly better than plus or minus one half of one of the small divisions shown on the paper, each of which is 24/5 = 4.8 hr wide. So I have two equations: 0.85 = Co and 0.085 = Co exp(−115b). When I divide the two equations, Co cancels out and I find 10 = exp(115b). Therefore, b = ln(10)/115 = 0.0200 ± 0.0003 hr−1. If I write the exponential as exp(−t/τ), then the time constant is τ = 1/b = 50.0 ± 0.9 hr. Often we say instead that the half-life is t1/2 = ln(2) τ = 34.7 ± 0.6 hr.

At this point, I suggest you inspect the plot and see if your result makes sense. Does the line appear to drop by half in one half-life? Using the plot, at t = 35 hr I estimate that C is about 0.42, which is just about half of 0.85, so it looks like I’m pretty close.

Now we can get the clearance from K = bV = (0.02 hr−1)(400 l) = 8 l/hr. The problem asks for units of ml/min (a common unit used in the medical literature), so (8 l/hr)(1 hr/60 min)(1000 ml/l) = 133 ml/min. The uncertainty in the clearance is probably determined by the uncertainty in the distribution volume, which we are not given but I’d guess is known to an accuracy of no better than 10%.
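
If you would rather check the graphical estimate numerically, here is a minimal Python sketch (mine, not part of the problem) that fits a straight line to ln C versus t and then computes K = bV:

```python
import numpy as np

t = np.array([0, 24, 48, 72, 96, 120])                 # time (hr)
C = np.array([0.85, 0.53, 0.33, 0.20, 0.125, 0.077])   # concentration (ng/ml)

# Least-squares line through (t, ln C); the slope is -b.
slope, intercept = np.polyfit(t, np.log(C), 1)
b = -slope                       # decay rate (1/hr)

V_ml = 400 * 1000                # distribution volume (ml)
K = b * V_ml / 60                # clearance (ml/min)

print(f"b = {b:.4f} 1/hr")
print(f"half-life = {np.log(2)/b:.1f} hr")
print(f"K = {K:.0f} ml/min")
# Expect roughly b = 0.020 1/hr, a half-life near 35 hr, and K near 133 ml/min,
# in agreement with the by-eye fit above.
```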

Astute readers might be thinking “This is a great homework problem, but I don’t understand why the distribution volume is so big; 400 l is much more than the volume of a person!” The distribution volume does not represent an actual volume of blood or of body fluid. Rather, it takes into account that most of the digoxin is stored in the tissue, with relatively little circulating in the blood. But since the blood concentration is what we measure, the distribution volume must be “inflated” to account for all the drug stored in muscle and other tissues. This is a common trick in pharmacokinetics.

I believe that analyzing data with semilog or log-log graph paper is one of those crucial skills that must be mastered by all science students. As an instructor, you cannot stress it enough. Hopefully this homework problem, and ones like it that you can invent, will reinforce this technique.

Friday, October 16, 2015

The Lewis Number

Last week in this blog I discussed why dolphins don’t breathe through gills like fish do. The take-home message was that using gills would cool the blood to the temperature of the surrounding water, and reheating the blood to the temperature of the dolphin’s body would require a prohibitive amount of energy.

You might be wondering: is there some tricky way that we can adjust things so that oxygen can diffuse without a significant heat transfer? Perhaps alter how long the blood is in thermal and diffusive contact with the seawater so there is time for oxygen diffusion but not time for thermal diffusion. Might that save the day?

You can compare molecular (mass) and thermal (heat) transport using the Lewis number, the ratio of the molecular diffusion constant to the thermal diffusion constant. Russ Hobbie and I discuss the Lewis number in Problem 20 of Chapter 4 of Intermediate Physics for Medicine and Biology. For oxygen diffusing in water, the diffusion constant is about 2 x 10−9 m2/s. The diffusion constant for heat is the thermal conductivity divided by the product of the specific heat capacity and the density, which for water is about 1.5 x 10−7 m2/s. Thus, the diffusion constant for oxygen is about one hundred times smaller than the diffusion constant for heat. In other words, heat diffuses about one hundred times more readily than oxygen, so it’s difficult to imagine how you could ever devise a situation where you could transfer oxygen without transferring heat too. As we concluded last week, physics constrains biology.

If you are exchanging heat and oxygen in air, the situation is a bit better: in air the diffusion constants of oxygen and of heat are roughly the same. You can’t have oxygen diffusion without heat diffusion, but at least you aren’t down by a factor of one hundred.
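
For readers who like to see the arithmetic spelled out, here is a short sketch comparing the two cases. The water numbers are the ones quoted above; the air values are approximate room-temperature figures that I am supplying for illustration, not numbers from IPMB, so the exact factor you get depends on the rounded values you use.

```python
# Water values are the ones quoted in the paragraph above; the air values are
# approximate room-temperature figures supplied here for illustration only.
D_O2_water = 2.0e-9      # m^2/s, oxygen diffusing in water
alpha_water = 1.5e-7     # m^2/s, thermal diffusivity of water

D_O2_air = 2.0e-5        # m^2/s, oxygen diffusing in air (approximate)
alpha_air = 2.0e-5       # m^2/s, thermal diffusivity of air (approximate)

# Lewis number as defined in IPMB: molecular diffusivity over thermal diffusivity.
Le_water = D_O2_water / alpha_water
Le_air = D_O2_air / alpha_air

print(f"Le (water) = {Le_water:.3f}  (heat outruns oxygen by a factor of {1/Le_water:.0f})")
print(f"Le (air)   = {Le_air:.1f}")
```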

The Lewis number is one of those useful dimensionless numbers—like the Reynolds number and the Peclet number—that summarize the relative importance of two physical mechanisms. Because these numbers are dimensionless, their values do not depend on the system of units you use.

This all sounds fine and good, so imagine my surprise when one of my Biological Physics students working on this week’s homework assignment told me that the definition of the Lewis number in IPMB differs from the definition used by other sources. Yikes! The question comes down to this: is the Lewis number defined as the molecular diffusion constant over the thermal diffusion constant, or as the thermal diffusion constant over the molecular diffusion constant? In one sense it does not matter which definition you use. Either definition will tell you that in water oxygen has a harder time diffusing than heat. The only difference is that in one case the Lewis number is 1/100, and in the other case it is 100. The definition is arbitrary, like which direction you call right and which you call left. Had the first person to talk about those two directions called right left and left right, it would make no difference; they are just labels. However, I concede that if everyone uses different labels, confusion results. If half the people call left left and the other half call left right, then giving directions will be difficult—you would have to verify that you used the same definition of left and right before you could tell someone how to get across town.

I decided to check that fount of all knowledge: Wikipedia (how did I grow up without it?). There the definition is the opposite of that in IPMB—“Lewis number (Le) is a dimensionless number defined as the ratio of thermal diffusivity to mass diffusivity.” The free dictionary says “A dimensionless number used in studies of combined heat and mass transfer, equal to the thermal diffusivity divided by the diffusion coefficient” and thermopedia says the same, as does this publication. In the book Air and Water, Mark Denny uses our definition, molecules over heat (perhaps I should say we use Denny’s definition, because I am pretty sure we used Air and Water as our source). Interestingly, the CRC Handbook of Chemistry and Physics (I looked at the 59th edition, which is the one sitting in my office) says heat over molecules, but then adds “N.B.: Lewis number is sometimes defined as reciprocal of this quantity.” My conclusion is that the definition is a bit uncertain, but Russ and I (and Denny) appear to have adopted the minority view. What should I do? I’ve added to the IPMB errata the following entry:
Page 109: At the end of Problem 20, add the sentence “Warning: the Lewis number is sometimes defined as the reciprocal of the definition used here.”

Friday, October 9, 2015

Dolphins are not Sharks

A picture of Flipper, the dolphin who starred in its own television show when I was young. Dolphins are warm blooded, and must breathe air rather than using gills to "breathe" water.
Flipper.
I grew up watching the TV show Flipper, about a dolphin. These curious creatures are mammals, so they are warm blooded, but they have adapted in many ways to living in the sea. They have not, however, completely evolved into fish. For instance, they breathe air like we do rather than extracting oxygen from seawater using gills.

Russ Hobbie and I mention dolphins in the 5th edition of Intermediate Physics for Medicine and Biology, in a homework problem in Chapter 3.
Problem 50. Fish are cold blooded, and “breathe” water (in other words, they extract dissolved oxygen from the water around them using gills). Could a fish be warm blooded and still breathe water? Assume a warm-blooded fish maintains a body temperature that is 20 °C higher than the surrounding water. Furthermore, assume that the blood in the gills is cooled to the temperature of the surrounding water as the fish breathes water. Calculate the energy required to reheat 1 l of blood to the fish’s body temperature. One liter of blood carries sufficient oxygen to produce about 4000 J of metabolic energy. Is the energy needed to reheat 1 l of blood to body temperature greater than or less than the metabolic energy produced by 1 l of blood? What does this imply about warmblooded fish? Why must a warm-blooded aquatic mammal such as a dolphin breathe air, not water? Use c = 4200 J K−1 kg−1 and ρ = 103 kg m−3 for both the body and the surrounding water. For more on this topic, see Denny (1993).
The basic idea is that the gills would need to “process” a lot of seawater to raise the oxygen concentration in a small amount of blood. The seawater and blood have similar specific heats (that of water), so the heat capacity of the blood is much less than the heat capacity of the processed water. In other words, the surrounding seawater cools the blood to the temperature of the water, rather than the dolphin warming the seawater to its body temperature. This cold blood in the gills must then be warmed to the dolphin body temperature, which takes a lot of energy—much more than you would get by using the extracted oxygen for metabolism. You can’t win.
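
Here is a minimal sketch of that arithmetic, using the constants given in the homework problem; the precise numbers matter less than the large mismatch between the two energies.

```python
c = 4200          # J/(K kg), specific heat (given in the problem)
rho = 1.0e3       # kg/m^3, density (given in the problem)
V = 1.0e-3        # m^3, one liter of blood
dT = 20           # K, body temperature minus water temperature

Q_reheat = rho * V * c * dT   # energy needed to rewarm 1 l of blood
E_metabolic = 4000            # J of metabolic energy from the O2 carried by 1 l of blood

print(f"Energy to reheat 1 l of blood:      {Q_reheat:.0f} J")
print(f"Metabolic energy from 1 l of blood: {E_metabolic} J")
print(f"Reheating costs {Q_reheat / E_metabolic:.0f} times the energy the oxygen provides")
```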

Air and Water: The Biology and Physics of Life's Media, by Mark Denny, superimposed on Intermediate Physics for Medicine and Biology.
Air and Water:
The Biology and Physics of Life's Media,
by Mark Denny.
The reference at the end of the homework problem is to the wonderful book Air and Water: The Biology and Physics of Life’s Media, by Mark Denny (Princeton University Press). Denny writes
Consider a hypothetical example. It could conceivably be advantageous for a warm-blooded animal such as a dolphin to breathe water instead of air. Such an adaption would remove the necessity for the animal to return periodically to the water’s surface, thereby increasing the time available in which to hunt food. However, if a 100 kg dolphin swimming in 7 °C water were to breathe water and still maintain a body temperature of 37 °C, it would expend energy at a rate of 3361 W just to heat its respiratory water. This is more than thirty times greater than its resting metabolic rate of 107 W! It suddenly becomes clear why marine mammals and birds continue to breathe air, and why water-breathing organisms (such as fish) are seldom much warmer than their watery surroundings.
A dolphin (a warm-blooded, air-breathing mammal) is very different from a shark (a cold-blooded, gill-breathing fish), even if they look similar.

Physics constrains biology. Evolution can do marvelous things, but it can’t violate the laws of physics.

Friday, October 2, 2015

Herman Carr, MRI pioneer

Nuclear Magnetic Resonance, Magnetic Resonance Spectroscopy, and Magnetic Resonance Imaging have resulted in Nobel Prizes to eight famous scientists.
  • Otto Stern, 1943, Physics, “for his contribution to the development of the molecular ray method and his discovery of the magnetic moment of the proton.”
  • Isidor Rabi, 1944, Physics, “for his resonance method for recording the magnetic properties of atomic nuclei.” 
  • Felix Bloch and Edward Purcell, 1952, Physics, “for their development of new methods for nuclear magnetic precision measurements and discoveries in connection therewith.”
  • Richard Ernst, 1991, Chemistry, “for his contributions to the development of the methodology of high resolution nuclear magnetic resonance (NMR) spectroscopy.”
  • Kurt Wuthrich, 2002, Chemistry, “for his development of nuclear magnetic resonance spectroscopy for determining the three-dimensional structure of biological macromolecules in solution.” 
  • Paul Lauterbur and Peter Mansfield, 2003, Physiology or Medicine, “for their discoveries concerning magnetic resonance imaging.”
Other pioneers are also well-known, such as Raymond Damadian and Erwin Hahn. Yet one crucial scientist who helped establish nuclear magnetic resonance is less well known: Herman Carr.

Russ Hobbie and I mention Carr in Intermediate Physics for Medicine and Biology in the context of a well-known MRI technique: the Carr-Purcell sequence (see Section 18.8). This sequence consists of a 90-degree radio-frequency magnetic pulse that tips the proton spins into the transverse plane, followed by a series of 180-degree RF pulses that form spin echoes (I discussed spin echoes in this blog previously).

Herman Carr (1924–2008) grew up in Alliance, Ohio (about 60 miles east of the town where I attended my junior year of high school, Ashland, Ohio). He was a sergeant in the US Army Air Corps during World War II, serving in Italy. He earned his physics PhD in 1953 from Harvard under the direction of Purcell, and spent most of his career at Rutgers University.

Carr is best known for his early work on magnetic resonance imaging. In his PhD thesis, he applied a magnetic field that varied with position and produced a one-dimensional image, thus introducing the use of magnetic field gradients for MRI. This idea was later developed by Paul Lauterbur. The gist of the method is that the magnetic field varies in space, and therefore the Larmor frequency of the proton spins varies in space. If you measure the magnetic resonance signal and separate it into different frequencies (Fourier analysis), each frequency component corresponds to the signal from a different location (see Section 18.9 of IPMB).
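
Here is a toy numerical illustration of that idea (my own sketch, with a made-up object and an assumed gradient, not Carr’s or Lauterbur’s actual parameters): spins in a one-dimensional "object" precess at position-dependent frequencies, and a Fourier transform of the summed signal recovers the object’s profile.

```python
import numpy as np

gamma = 2 * np.pi * 42.58e6              # rad/(s T), proton gyromagnetic ratio
G = 1e-3                                 # T/m, assumed gradient strength
x = np.linspace(-0.05, 0.05, 256)        # positions along the gradient (m)
rho = np.where(np.abs(x) < 0.02, 1.0, 0.0)   # a toy "object": a 4-cm slab of spins

dt = 1e-5                                # s, sampling interval
t = np.arange(2048) * dt
# In the rotating frame each position precesses at frequency gamma*G*x;
# the detected signal is the sum over all positions.
signal = rho @ np.exp(1j * gamma * G * np.outer(x, t))

spectrum = np.abs(np.fft.fftshift(np.fft.fft(signal)))
freq = np.fft.fftshift(np.fft.fftfreq(len(t), dt))   # Hz
x_recovered = 2 * np.pi * freq / (gamma * G)         # map frequency back to position

# Apart from windowing ripple, the spectrum is flat between about x = -0.02 m and
# x = +0.02 m and zero elsewhere: a one-dimensional image of the slab.
```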

A controversy arose about the MRI Nobel prize to Lauterbur and Mansfield. Some claim that either Damadian (who did medical imaging without gradients) or Carr (who used gradients but did not do medical imaging), or both, should have shared in the prize. This is primarily a historical debate, about which I am not an expert. My impression is that while Lauterbur, Mansfield, Damadian and Carr all deserve credit for their work, the Nobel committee was not wrong in singling out the two winners.

Carr also made important contributions to using nuclear magnetic resonance to measure diffusion. Below is the abstract of Carr and Purcell’s article “Effects of Diffusion on Free Precession in Nuclear Magnetic Resonance Experiments” (Physical Review, Volume 94, Pages 630–638, 1954).
Nuclear resonance techniques involving free precession are examined, and, in particular, a convenient variation of Hahn's spin-echo method is described. This variation employs a combination of pulses of different intensity or duration (“90-degree” and “180-degree” pulses). Measurements of the transverse relaxation time T2 in fluids are often severely compromised by molecular diffusion. Hahn's analysis of the effect of diffusion is reformulated and extended, and a new scheme for measuring T2 is described which, as predicted by the extended theory, largely circumvents the diffusion effect. On the other hand, the free precession technique, applied in a different way, permits a direct measurement of the molecular self-diffusion constant in suitable fluids. A measurement of the self-diffusion constant of water at 25°C is described which yields D=2.5(±0.3)×10−5 cm2 /sec, in good agreement with previous determinations. An analysis of the effect of convection on free precession is also given. A null method for measuring the longitudinal relaxation time T1, based on the unequal-pulse technique, is described.
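To see numerically what “largely circumvents the diffusion effect” means, here is a small sketch using the standard textbook expressions for diffusion attenuation in a uniform gradient: exp(−(2/3) γ²G²D τ³) for a single Hahn echo at time 2τ, and exp(−(1/3) γ²G²D τ² t) for a Carr-Purcell train with echo spacing 2τ observed at total time t. The gradient value is an arbitrary assumption; the water diffusion constant is the one reported in the abstract.

```python
import numpy as np

gamma = 2 * np.pi * 42.58e6   # rad/(s T), proton gyromagnetic ratio
G = 1e-3                      # T/m, an assumed (illustrative) field gradient
D = 2.5e-9                    # m^2/s, water self-diffusion constant from the abstract
t_total = 0.5                 # s, time at which the last echo is observed

# Single Hahn echo refocused at t_total (180-degree pulse at t_total/2):
tau = t_total / 2
A_hahn = np.exp(-(2/3) * gamma**2 * G**2 * D * tau**3)
print(f"Single Hahn echo at {t_total} s: attenuation = {A_hahn:.3f}")

# Carr-Purcell train: n echoes spaced 2*tau apart, last echo at t_total.
for n in (1, 10, 100):
    tau = t_total / (2 * n)
    A_cp = np.exp(-(1/3) * gamma**2 * G**2 * D * tau**2 * t_total)
    print(f"Carr-Purcell, n = {n:3d} echoes: attenuation = {A_cp:.3f}")
# More 180-degree pulses (shorter tau) push the attenuation factor toward 1,
# so T2 can be measured without diffusion masking it.
```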
This paper was named a citation classic, and excerpts from Carr’s reminiscences of the paper (written in 1983) are reproduced below.
In the fall of 1949 at Harvard University, I began reading about nuclear magnetic resonance (NMR) under the guidance of E. M. Purcell. In early November, Purcell read E. L. Hahn’s historic abstract about the fascinating phenomenon of ‘spin echoes.’ Purcell suggested that I try to understand this effect.

During Christmas recess I traveled to a student conference at the University of Illinois where ... I made a visit to the physics building where Hahn showed me his laboratory—a cramped hallway at the top of a high stairwell. There for the first time I saw spin echoes and learned about their discovery.

Hahn had explained his echoes using a model involving only equatorial components. Purcell suggested using a three-dimensional model, and this greatly simplified the understanding of the relatively complicated echoes associated with Hahn’s equal pulses. It was during lunch one day in the spring of 1950 that I realized the explanation could be simplified even more by using two unequal 90° and 180° pulses, and indeed a sequence consisting of a 90° pulse followed by a series of 180° pulses ... By the end of the summer of 1950, we had seen our own echoes at Harvard.

The 1954 paper—drafts of which were written in a cabin on a Cache Lake island in Ontario’s Algonquin Park—included work done both at Harvard and using, in 1952–1953, Henry Torrey’s excellent new magnet at Rutgers University. In addition ... the 1954 paper included an explanation of the effect of a 180° pulse in partially eliminating the artificial decay caused by diffusion in an inhomogeneous magnetic field ... The absolute value of the water self-diffusion coefficient D reported in the paper was measured at Rutgers using “anti-Helmholtz” coils to obtain the nearly uniform gradient ... To the best of my knowledge, this was the first use of intentionally applied gradients to obtain spatial information.

The extensive citation of this 1954 paper is undoubtedly due both to its very simple explanation of important basic phenomena, and to the exceedingly extensive—indeed, beyond all our expectations—applications of free precession techniques, especially when coupled with fast computer technology...
An obituary of Carr is given here.

Friday, September 25, 2015

Polonium-210, The Perfect Poison

Figure 17.27 in the 5th edition of Intermediate Physics for Medicine and Biology shows the decay series arising from the radioactive isotope radon-222, which itself is produced by the decay of the long-lived isotope uranium-238. The last step in this long chain of reactions is the alpha decay of polonium-210 to the stable isotope lead-206. The half-life of this decay is 138 days. This is not the only isotope of polonium in radon’s decay series: the heavier isotopes polonium-214 and polonium-218 also appear, with half-lives of 160 microseconds and 3 minutes, respectively.

Polonium was discovered by Marie and Pierre Curie in 1898 while analyzing pitchblende, a uranium-containing ore. It was named after Marie’s homeland, Poland. Today, 210Po is produced by bombarding bismuth-209 with neutrons, forming bismuth-210, which undergoes beta decay to 210Po.

210Po is infamous for being a deadly poison. For a given mass, 210Po is 250,000 times more toxic than hydrogen cyanide. Its toxicity comes from the 5.3-MeV alpha particle it emits. Because alpha particles are easily stopped by clothing and even skin, 210Po is dangerous primarily when breathed or ingested, so that the alpha particles are emitted inside the body. A nearly pure alpha emitter, 210Po rarely emits a gamma ray, making it difficult to detect this poison unless one measures the alpha particles directly. A lethal dose comes from ingesting about a microgram.
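
To get a feel for why a microgram is lethal, here is a rough back-of-the-envelope sketch (my arithmetic, using the half-life and alpha energy quoted above, and assuming the ingested 210Po is retained and its alpha energy is deposited uniformly throughout the body).

```python
import math

half_life = 138 * 24 * 3600            # s (from the text)
lam = math.log(2) / half_life          # decay constant, 1/s
N = (1e-6 / 210) * 6.022e23            # atoms in 1 microgram of 210Po
activity = lam * N                     # decays per second (Bq)

E_alpha = 5.3e6 * 1.602e-19            # J per 5.3-MeV alpha particle
power = activity * E_alpha             # W deposited if every alpha stops in the body

print(f"Activity of 1 microgram of 210Po: {activity:.2e} Bq")
print(f"Power deposited: {power*1e6:.0f} microwatts")
print(f"Absorbed dose rate in a 70-kg person: {power/70*86400*1000:.0f} mGy/day")
# Multiplying by a radiation weighting factor of about 20 for alpha particles gives
# an equivalent dose of several sievert per day, far above a lethal dose when
# accumulated over a few weeks.
```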

210Po was used in the 2006 assassination of Alexander Litvinenko, a former Russian spy who was apparently given polonium-laced tea by Russian agents (the investigation into this complicated murder continues, and the details are still debated; see here and here). Death by 210Po is slow; it took 22 days for the radiation to claim the life of the 44-year-old Litvinenko.

Polonium was also suspected to play a role in the 2004 death of Palestinian leader Yasser Arafat. Just this month, a French investigation has concluded that there is not enough evidence for pressing charges. The issue is complicated because 210Po is found in cigarette smoke, and Arafat was a heavy smoker. The National Council on Radiation Protection and Measurements reports that the annual effective dose equivalent to a smoker from radiation in tobacco is about 13 mSv, which is over four times the average annual dose of 3 mSv we are all exposed to (see Section 16.12 in IPMB), but is still a tiny dose.

The Environmental Protection Agency has published a report titled “Occurrence of 210Po and Biological Effects of Low-Level Exposure: The Need for Research.” As with all studies of low-level radiation exposure, the results are difficult to assess, and depend on our assumptions about radiation risks at small doses. But Alexander Litvinenko’s death proves that at high doses 210Po is very dangerous indeed; it’s perhaps the perfect poison.

Friday, September 18, 2015

Boltzmann’s Tomb

Asimov's Biographical Encyclopedia of Science and Technology, by Isaac Asiimov, superimposed on Intermediate Physics for Medicine and Biology.
Asimov’s Biographical Encyclopedia
of Science and Technology,
by Isaac Asimov.
In Chapter 3 of the 5th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the Boltzmann factor and Boltzmann’s constant. Later in the book, we analyze the Poisson-Boltzmann equation and the Stefan-Boltzmann law. Who was Boltzmann? In Asimov’s Biographical Encyclopedia of Science and Technology (2nd revised edition), Isaac Asimov writes
BOLTZMANN, Ludwig Edward (bohlts’mahn)
Austrian physicist
Born: Vienna, February 20, 1844
Died: Duino, near Trieste (then in Austria, now in Italy), September 5, 1906

Boltzmann, the son of a civil servant, received his Ph.D. from the University of Vienna in 1866. His work on the kinetic theory of gases was done independently of Maxwell and they share the credit.

Beginning in 1871, Boltzmann increased the rigor of the mathematical treatment and emphasized the statistical interpretation of the second law of thermodynamics thus founding “statistical mechanics.” He showed that Clausius’ concept of increasing entropy of disorder [could be based on statistical ideas], laying the groundwork for the later achievements of Gibbs.

He was a firm proponent of atomism at a time when Ostwald was mounting the final campaign against it. Boltzmann also advanced a mathematical treatment that explained the manner in which, according to the experimental observations of Stefan (whom Boltzmann, in his college years, served as assistant), quantity of radiation increased as the fourth power of the temperature. This is therefore sometimes called the Stefan-Boltzmann law.

Boltzmann turned down a chance to succeed Kirchhoff at Berlin but in 1894 succeeded to Stefan’s post in Vienna.

Though Boltzmann lived longer than Maxwell, his life too was cut short. In his case it was suicide, brought on by recurrent episodes of severe mental depression accentuated, perhaps, by opposition to his atomistic notions by Ostwald and others.

His equation relating entropy and disorder was engraved on the headstone of his grave.
I am particularly intrigued by the last sentence of Asimov’s entry. Who puts an equation on their tombstone? Boltzmann did!
A photograph of Boltzmann's tombstone, with the equation S = k log W on it.
Boltzmann's tombstone.
This equation is Eq. 3.20 in IPMB.

S = kB ln Ω ,

where S is the entropy, kB is Boltzmann’s constant, ln is the natural logarithm, and Ω is the number of microstates. The equation says that the entropy increases as the number of possible microstates increases. If there are only one or a few states available, the entropy is small; if there are many states available, the entropy is large. Thus, from a statistical mechanics point-of-view, the thermodynamic concept of entropy (developed well before Boltzmann’s work) is a measure of the number of states. The logarithm is important, because if system A has 10 states available and system B has 20 states available, the total number of states is the product, 200. If the entropy were proportional to Ω, the total entropy of the two systems would not be the sum of the entropy in each system. However, the logarithm property ln(ΩAΩB)=ln(ΩA)+ln(ΩB) ensures that the entropy is indeed additive.
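
Here is a trivial numerical check of that additivity argument, using the 10-state and 20-state example above (my own illustration, not from IPMB).

```python
import math

k_B = 1.380649e-23               # J/K, Boltzmann's constant

omega_A, omega_B = 10, 20        # states available to systems A and B
S_A = k_B * math.log(omega_A)
S_B = k_B * math.log(omega_B)
S_combined = k_B * math.log(omega_A * omega_B)   # 200 states for the combined system

print(S_A + S_B)                 # about 7.3e-23 J/K
print(S_combined)                # the same value, because ln(ab) = ln(a) + ln(b)
```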

The definition of entropy in terms of the number of states is a fundamental relationship connecting thermodynamics and statistical mechanics. No wonder Boltzmann wanted it on his tombstone.

Friday, September 11, 2015

Meselson, Stahl, and the Most Beautiful Experiment in Biology

In Chapter 17 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I added a new homework problem to the 5th edition about the famous Hershey-Chase experiment. We had two goals: to demonstrate how scientists use radioisotopes as tracers in biological experiments, and to describe a key experiment in modern biology.

A series of homework problems in Chapter 1 of IPMB describe the physics of the ultracentrifuge. Perhaps Russ and I should add a new homework problem in Chapter 1, to demonstrate how the centrifuge provides crucial information about biological mechanisms, and to describe another famous biological experiment. Here is my try at this new problem.
Problem 24 1/2. Suppose you grow E. coli bacteria in a growth medium containing the rare, heavy but stable isotope of nitrogen, N15. At some time t = 0 remove some of the E. coli from this medium and place it into another growth medium containing the normal isotope of nitrogen, N14. Then, at different times place DNA from the E. coli into a density gradient centrifuge (see Problem 23) and measure where along the gradient the DNA settles.
a) Describe qualitatively what you would expect to see at t = 0, before any of the E. coli reproduce.
b) Assume DNA replicates semiconservatively: replication produces two new DNA molecules, each containing one strand from the original DNA molecule and one new strand synthesized from the medium. Describe what you would expect to see at t = t1, where t1 is the time required to produce one new generation of E. coli. Then describe what you would expect to see at t = 2t1.
c) Repeat part b) assuming DNA replicates conservatively: each replication produces two DNA molecules, one containing the original two strands and the other containing two new strands. 
d) Repeat part b) assuming DNA replicates dispersively: each replication produces two new DNA molecules, both containing a mix of the original and new DNA. 
This experiment was performed by Meselson and Stahl in 1958, and is one of the central experiments underlying modern biology. It demonstrates the semiconservative replication of DNA.
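
To make parts (b) through (d) concrete, here is a small simulation sketch (my own, not from IPMB) that tracks the expected band positions. Each duplex is labeled by the fraction of its nitrogen that is heavy, which determines where it settles in the density gradient (1.0 is the "heavy" band, 0.5 the "hybrid" band, 0.0 the "light" band).

```python
from collections import Counter

def replicate(duplexes, mode):
    new = []
    for s1, s2 in duplexes:                      # a duplex = (strand1, strand2) heavy fractions
        if mode == "semiconservative":
            new += [(s1, 0.0), (s2, 0.0)]        # each old strand pairs with a new light strand
        elif mode == "conservative":
            new += [(s1, s2), (0.0, 0.0)]        # old duplex stays intact, plus an all-new duplex
        elif mode == "dispersive":
            mix = (s1 + s2) / 4                  # parent material split evenly, diluted by new
            new += [(mix, mix), (mix, mix)]
    return new

for mode in ("semiconservative", "conservative", "dispersive"):
    duplexes = [(1.0, 1.0)]                      # start fully labeled with heavy nitrogen
    print(mode)
    for gen in range(1, 4):
        duplexes = replicate(duplexes, mode)
        bands = Counter(round((s1 + s2) / 2, 3) for s1, s2 in duplexes)
        print(f"  generation {gen}: bands at heavy-fraction {dict(bands)}")
# Only the semiconservative model gives a persistent hybrid band plus a growing
# light band, which is what Meselson and Stahl observed.
```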
The Eighth Day of Creation: The Makers of the Revolution in Biology, by Horace Freeland Judson, superimposed on Intermediate Physics for Medicine and Biology.
The Eighth Day of Creation,
by Horace Freeland Judson.
If you want to learn more about the Meselson-Stahl experiment, I suggest reading Chapter 3 of The Eighth Day of Creation: The Makers of the Revolution in Biology, by Horace Freeland Judson. Below I provide an excerpt.
I first heard of semiconservative replication on New Year’s Day, 1958, in Chicago—and a bright, windy, iron-cold morning it was. Seven of us who had been undergraduates together at the University of Chicago (we had all overlapped Watson’s last year there) were sitting scratchy-eyed over bacon and eggs and coffee when Matthew Meselson, by then a doctoral candidate with Pauling at Cal Tech, took a photograph from his wallet and passed it around the table. The picture showed a stack of gray stripes, with narrow, dark-gray bands across them—some stripes with one band, some with two or three together near the middle. The photo was the main result of an experiment that Meselson had devised with a post doctoral fellow at Cal Tech, Franklin Stahl.

Their paper was not yet published—not yet written. The work it describes is now recognized as displaying the most rare technical skill, while conceptually its confirmation of the way DNA reproduces itself has become, simply, part of the mainstream. In its place towards the end of the history of the elucidation of the structure and function of DNA, Meselson’s and Stahl’s paper possessed an importance and authority like Oswald Avery’s announcement, fourteen years earlier, of the isolation of the transforming principle and its identification as DNA. “Classic” was Watson’s epithet for Meselson’s and Stahl’s paper. Watson’s predecessor as director of the Cold Spring Harbor Laboratory, John Cairns, startled me in conversation when he described Meselson’s central demonstration without qualification as “the most beautiful experiment in biology.”

Friday, September 4, 2015

Learning Biology

Suppose you are a physicist, mathematician, or engineer who wants to change your research direction toward biology and medicine. How do you learn biology? Let’s assume you don’t quit your day job, so you have limited time. Here are my suggestions.
  1. Machinery of Life, by David Goodsell, superimposed on Intermediate Physics for Medicine and Biology.
    Machinery of Life,
    by David Goodsell.
    Read The Machinery of Life (2nd edition), by David Goodsell. I discussed this book a few weeks ago in this blog. It’s visual, easy to read, not too long, cheap, and doesn’t get bogged down in details. It’s a great introduction; this is where I would start.
  2. If you haven’t had an introductory biology class, you might consider taking this online biology class from MIT. It’s free, it has homework assignments and quizzes so you can assess your learning, and you can work at your own schedule. For those who prefer an online class to reading a book, this is the thing to do.
  3. If you would prefer reading an introductory biology textbook, a popular one is Campbell Biology by Reece et al., now in its 10th edition. The MIT online course mentioned above and the introductory biology classes here at Oakland University use this book. Its advantages are that it covers all of biology and it is written for introductory students. Its disadvantages are that it is expensive and long. I am not an expert on the different intro biology textbooks; there may be others just as good.
  4. The Eighth Day of Creation,  by Horace Freeland Judson, superimposed on Intermediate Physics for Medicine and Biology.
    The Eighth Day of Creation,
    by Horace Freeland Judson.
    I like to learn a subject by studying its history. If you want to try this, I suggest The Double Helix by James Watson (of Watson and Crick) and The Eighth Day of Creation by Horace Freeland Judson. Watson’s book is a classic: a first-person account of the discovery of the structure of DNA. It is well written, controversial, and should be read by anyone interested in science. Judson’s book is longer and more comprehensive; it is a fantastic book.
  5. The textbook Physical Biology of the Cell by Phillips et al. was written by physicists trying to learn biology. Also from a physicist’s point of view are Biological Physics and Physical Models of Living Systems, both by Philip Nelson. These books don’t cover all of biology, but a physicist may like them.
  6. I learned a lot of biology in high school reading Isaac Asimov books. They often take a historical approach, and are qualitative, interesting, clearly written, fairly short, and cheap. I worry about recommending them because biology has progressed so much over the last few decades that these books from the 1960s are out-of-date. However, I suspect they are still useful introductions, and I suggest The Wellsprings of Life, The Genetic Code, The Human Body, The Human Brain, and A Short History of Biology.
  7. Some books from my ideal bookshelf cover parts of biology from the point of view of a physicist: Air and Water by Mark Denny, Scaling: Why is Animal Size so Important? by Knut Schmidt-Nielsen, and Random Walks in Biology by Howard Berg. Steven Vogel has many books you might like, including Life in Moving Fluids, Vital Circuits, and Life’s Devices.
  8. Nothing in biology makes sense except in light of evolution. To learn about evolution, read the books of Stephen Jay Gould. I enjoyed his collections of essays from the magazine Natural History. Start with Ever Since Darwin.
  9. Textbook of Medical Physiology, by Guyton and Hall, superimposed on Intermediate Physics for Medicine and Biology.
    Textbook of Medical Physiology,
    by Guyton and Hall.
  10. Once you have a general biology background, what comes next? When I was in graduate school, I sat in on the Vanderbilt Medical School’s Physiology class and their Biochemistry class. These are the two courses that I encourage Oakland University Medical Physics graduate students to take. Typical textbooks are Guyton and Hall’s Textbook of Medical Physiology, now in its 13th edition, and Nelson and Cox's Lehninger Principles of Biochemistry, now in its 6th edition. Both books are long, expensive, and detailed. If interested in cell and molecular biology, a leading text is Molecular Biology of the Cell by Bruce Alberts and Alexander Johnson. 
  11. If you have the time, you can do what Russ Hobbie did: between 1971 and 1973 he audited all the courses medical students take in their first two years at the University of Minnesota. Finally, you can always purchase a copy of the 5th edition of Intermediate Physics for Medicine and Biology!
If readers of the blog have their own recommendations, please add them in the comments.

Friday, August 28, 2015

Art Winfree and the Bidomain Model of Cardiac Tissue

Art Winfree was a pioneer in applying physics and mathematics to cardiac electrophysiology. Russ Hobbie and I cite him often in the 5th edition of Intermediate Physics for Medicine and Biology. After his untimely death in 2002, I was asked to write an article for a special issue of the Journal of Theoretical Biology published in his honor. My paper, “Art Winfree and the Bidomain Model of Cardiac Tissue,” appeared in 2004.

My original submission for the special issue was a personal tribute to Art. It began
“Spiral waves have become so popular in Tucson they are even sold in hair styling salons (Figure 1)”
A photograph in a preprint from Art Winfree, with the caption "Spiral waves have become so popular in Tucson they are even sold in hair styling salons (Figure 1)"
Figure 1.
I had to laugh as I read the above quote in a preprint Art Winfree sent me. It was to be the opening sentence of a chapter appearing in a prestigious textbook on cardiac electrophysiology. Unfortunately, the sentence and the picture were deleted before the book's publication, although the picture (Fig. 1) did appear eventually in the second edition of Art’s The Geometry of Biological Time. For me, the quote captures the essence of Art: his humor, his irreverence, and his uncanny ability to find science in the world around him. I only met Art in person once, but we corresponded often by email, exchanging ideas, frustrations, and gossip. Of all the scientists who have influenced my research career, only my PhD advisor John Wikswo had a greater impact than Art Winfree did. In this paper, I describe several instances where my path crossed Art’s as we each attacked related problems in cardiac electrophysiology. In addition, I hope to show that Art made important contributions to what is known as the “bidomain model” of cardiac tissue.
Later in the article is one of my favorite passages.
I recall vividly a sunny day in April, soon after my second daughter Katherine was born. I was sitting on a rocking chair in the living room of our house in Kensington, Maryland, holding the sleeping infant in one arm and Art’s book When Time Breaks Down in the other. Outside I could see our dogwood tree in full blossom. As I read page after page, I remember thinking “life doesn’t get any better than this.” The book (and the daughter) changed my life.
Unfortunately, the editors of the special issue didn’t like my paper, saying they wanted a more traditional review article. In particular, they objected to my quoting Art’s emails he had sent me. So, I gave the paper a lobotomy and published a harmless but lifeless review. When the issue came out, I found a wonderful article by George Oster about Winfree, full of personal insights and even the text of one of Art’s emails. I wish now I had pushed harder to get my article published in its original form. The best article in the special issue was “Art Winfree, Artist of Science” by his daughter Rachael Winfree.

In the acknowledgments of my paper is the line “I would like to thank Jesse Malouf for his help editing this paper.” Jesse was a student in my Honors College course about Pacemakers and Defibrillators. At Oakland University, the Honors College has many of the best students in the university, but they come from all backgrounds and often have weak math skills. Jesse was a mathaphobe, but a wonderful writer. On one of my exams I had a mixture of questions, some requiring mathematical analysis and others needing an essay. Jesse skipped the math questions, but to make up for it he not only answered all the essay questions elegantly but also wrote a “bonus essay.” I had never had a student hand in a bonus essay before! The next semester, I hired him to help me write the Winfree article. I fear many of his contributions to the original version were not included in the published one.

In the “olden days” the original draft of my Winfree article would be lost forever, or maybe would sit in some file cabinet unseen for decades. But nowadays, you can find anything on the internet (how did we live without it?). I have posted the original submission on my ResearchGate page. You can find it here.