Friday, July 29, 2016

Niels Bohr and the Stopping Power of Alpha Particles

In Chapter 15 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the interaction of charged particles with electrons.
15.11.1 Interaction with Target Electrons

We first consider the interaction of the projectile with a target electron, which leads to the electronic stopping power, Se. Many authors call it the collision stopping power, Scol. There can be interactions in which a single electron is ejected from a target atom or interactions with the electron cloud as a whole (a plasmon excitation). The stopping power at higher energies, where it is nearly proportional to β⁻² [β = v/c, where v is the speed of the projectile and c is the speed of light], has been modeled by Bohr, by Bethe, and by Bloch (see the review by Ahlen 1980).
Bohr is, of course, the famous Niels Bohr, one of the greatest physicists of all time. I am familiar with Bohr’s model of the hydrogen atom (see Sec. 14.3), but not as much with his work on the stopping power of charged particles. It turns out that Bohr’s groundbreaking work on hydrogen grew out of his study of the stopping power of alpha particles. Moreover, the stopping power analysis was motivated by Ernest Rutherford’s experiments on the scattering of alpha particles, which established the nuclear structure of the atom. This chain of events began with the young Niels Bohr arriving in Manchester to work with Rutherford in March 1912. Abraham Pais discusses this part of Bohr’s life in his biography Niels Bohr’s Times: In Physics, Philosophy, and Polity.
Bohr finished his paper on this subject [the energy loss of alpha particles when traversing matter] only after he had left Manchester; it appeared in 1913. The problem of the stopping of electrically charged particles remained one of his lifelong interests. In 1915 he completed another paper on that subject, which includes the influence of effects due to relativity and to straggling (that is, the fluctuations in energy and in range of individual particles)…

Bohr’s 1913 paper on α-particles, which he had begun in Manchester, and which had led him to the question of atomic structure, marks the transition to his great work, also of 1913, on that same problem. While still in Manchester, he had already begun an early sketch of these entirely new ideas. The first intimation of this comes from a letter, from Manchester, to Harald [Niels’ brother]: ‘Perhaps I have found out a little about the structure of atoms. Don’t talk about it to anybody…It has grown out of a little information I got from the absorption of α-rays.’ I leave the discussion of these beginnings to the next chapter.
On 24 July 1912 Bohr left Manchester for his beloved Denmark. His postdoctoral period had come to an end.
So the alpha particle stopping power calculation Russ and I discuss in Chapter 15 led directly to Bohr's model of the hydrogen atom, for which he got the Nobel Prize in 1922.

Friday, July 22, 2016

Error Rates During DNA Copying

Chapter 3 of Intermediate Physics for Medicine and Biology discusses the Boltzmann factor. In the homework exercises at the end of the chapter, we include a problem in which you apply the Boltzmann factor to estimate the error rate during the copying of DNA.
Problem 30. The DNA molecule consists of two intertwined linear chains. Sticking out from each monomer (link in the chain) is one of four bases: adenine (A), guanine (G), thymine (T), or cytosine (C). In the double helix, each base from one strand bonds to a base in the other strand. The correct matches, A-T and G-C, are more tightly bound than are the improper matches. The chain looks something like this, where the last bond shown is an “error.”
The probability of an error at 300 K is about 10⁻⁹ per base pair. Assume that this probability is determined by a Boltzmann factor e^(−U/kBT), where U is the additional energy required for a mismatch.
(a) Estimate this excess energy.
(b) If such mismatches are the sole cause of mutations in an organism, what would the mutation rate be if the temperature were raised 20°C?
This is a nice simple homework problem that provides practice with the Boltzmann factor and insight into the thermodynamics of base pair copying. Unfortunately, reality is more complicated.
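Part (a) reduces to solving p = e^(−U/kBT) for U. Here is a quick numerical sketch (my own script, not from the book; the physical constants and the choice T = 320 K for part (b) are my additions):

```python
import math

# Boltzmann-factor estimate for Problem 30 (a sketch; numbers are mine).
kB = 1.381e-23  # Boltzmann's constant, J/K
T1 = 300.0      # temperature, K
p1 = 1e-9       # error probability per base pair

# (a) Excess energy U from p = exp(-U/(kB*T)):  U = -kB*T*ln(p)
U = -kB * T1 * math.log(p1)
print(U / (kB * T1))   # about 20.7 kB*T
print(U / 1.602e-19)   # about 0.54 eV

# (b) Error probability at a temperature 20 C higher (same U)
T2 = 320.0
p2 = math.exp(-U / (kB * T2))
print(p2)              # roughly 4e-9, about a fourfold increase
```

Note how a modest 7% rise in absolute temperature increases the error rate severalfold, because the energy sits in an exponent.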

William Bialek addresses the problem of DNA copying in his book Biophysics: Searching for Principles (Princeton University Press, 2012). He notes that the A typically binds to T. If A were to bind with G, the resulting base pair would be the wrong size and grossly disrupt the DNA double helix (A and G are both large double-ring molecules). However, if A were to bind incorrectly with C, the result would fit okay (C and T are about the same size) at the cost of eliminating one or two hydrogen bonds, which have a total energy of about 10 kBT. Bialek writes
An energy difference of ΔF ~ 10 kBT means that the probability of an incorrect base pairing should be, according to the Boltzmann distribution, e^(−ΔF/kBT) ~ 10⁻⁴. A typical protein is 300 amino acids long, which means that it is encoded by about 1000 bases; if the error probability is 10⁻⁴, then replication of DNA would introduce roughly one mutation in every tenth protein. For humans, with a billion base pairs in the genome, every child would be born with hundreds of thousands of bases different from his or her parents. If these predicted error rates seem large, they are—real error rates in DNA replication vary across organisms [see the vignette “what is the error rate in transcription and translation” in Cell Biology by the Numbers], but are in the range of 10⁻⁸–10⁻¹², so the entire genome can be copied without almost any mistakes.
So how does the error rate become so small? Enzymes called DNA polymerases proofread the newly copied DNA and correct most mismatches. Because of this proofreading, the overall error rate is far smaller than the 10⁻⁴ you would estimate from the Boltzmann factor alone.

Our homework problem is therefore a little misleading, but it has redeeming virtues. First, the error we show in the figure is G-A, which would more severely disrupt the DNA's double helix structure. That specific mismatch may well have a higher energy, and therefore a lower error rate, from the Boltzmann factor alone. Second, the problem illustrates how sensitive the Boltzmann factor is to small changes in energy. If ΔE = 10 kBT, the Boltzmann factor is e⁻¹⁰ = 0.5 × 10⁻⁴. If ΔE = 20 kBT, it is e⁻²⁰ = 2 × 10⁻⁹. Doubling the energy reduces the error rate by more than a factor of 10,000. Wow!
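The sensitivity claim is easy to verify with a two-line check of my own:

```python
import math

# Doubling the mismatch energy from 10 kB*T to 20 kB*T shrinks the
# Boltzmann factor by e^10, a factor of roughly 22,000.
f10 = math.exp(-10)   # about 4.5e-5
f20 = math.exp(-20)   # about 2.1e-9
print(f10, f20, f10 / f20)
```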

Friday, July 15, 2016

Word Clouds

I have always wondered about those funny-looking collections of different-sized, different-colored words: word clouds. This week I learned how to create a word cloud from any text I choose using free online software. Of course, I chose Intermediate Physics for Medicine and Biology. Here is what I got:

The word cloud speaks for itself, but let me add a few comments. First, I deleted the preface, the table of contents, and the index from a pdf copy of IPMB before submitting it. The software was having trouble with such a large input file, and reducing its size seemed to help. After the list of words and their frequencies was created, I edited it. The software is smart enough not to include common words like “the” and “is,” but I deleted others that seemed generic to me, like “consider” and “therefore.” I kept words that appeared at least 250 times, which left about 65 words. The most common word was “Fig”, as in “...spherical air sacs called alveoli (Fig. 1.1b).” The third most common was “Problem”, as in “Problem 1. Estimate the number of....” I considered removing these, but illustrations and end-of-chapter exercises are an important part of the book, so they stayed. I was surprised by the second most common word: “energy”. Russ Hobbie and I did not set out to make energy a unifying theme of the book, but apparently it is.
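The counting step behind a word cloud can be sketched in a few lines of Python (my own illustration; the stopword list and sample text are made up, and the real software surely does more):

```python
import re
from collections import Counter

# A minimal word-frequency sketch: lowercase the text, drop common or
# generic words, and keep only words above a count threshold.
STOPWORDS = {"the", "is", "a", "of", "and", "in", "to",
             "consider", "therefore"}  # illustrative list, not the real one

def word_frequencies(text, min_count=1):
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return {w: n for w, n in counts.items() if n >= min_count}

sample = ("Energy is conserved. The energy of the system is the sum "
          "of kinetic energy and potential energy.")
print(word_frequencies(sample, min_count=2))  # {'energy': 4}
```

For the book itself one would use a threshold like 250 instead of 2, as described above.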

I will let you decide if this word cloud is profound or silly. It was fun, and I like to share fun things with the readers of IPMB. Enjoy!

Friday, July 8, 2016

Cell Biology by the Numbers

Six years ago I wrote an entry in this blog about the bionumbers website. Now Ron Milo and Rob Phillips have turned that website into a book: Cell Biology by the Numbers. Milo and Phillips write
One of the central missions of our book is to serve as an entry point that invites the reader to explore some of the key numbers of cell biology. We hope to attract readers of all kinds—from seasoned researchers, who simply want to find the best values for some number of interest, to beginning biology students, who want to supplement their introductory course materials. In the pages that follow, we provide a broad collection of vignettes, each of which focuses on quantities that help us think about sizes, concentrations, energies, rates, information content, and other key quantities that describe the living world.
One part of the book that readers of Intermediate Physics for Medicine and Biology might find useful is their “rules of thumb.” I reproduce a few of them here:
• 1 dalton (Da) = 1 g/mol ~ 1.6 × 10⁻²⁴ g.
• 1 nM is about 1 molecule per bacterial volume [E. coli has a volume of about 1 μm³].
• 1 M is about one per 1 nm³.
• Under standard conditions, particles at a concentration of 1 M are ~1 nm apart.
• Water molecule volume ~ 0.03 nm³, (~0.3 nm)³.
• A small metabolite diffuses 1 nm in ~1 ns.
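Two of these rules of thumb are easy to check numerically (my own arithmetic, not from the book):

```python
# Check two rules of thumb with Avogadro's number.
NA = 6.022e23        # molecules per mole

# 1 nM in a 1 um^3 bacterial volume:
conc = 1e-9          # mol/L
volume_L = 1e-15     # 1 um^3 = 1e-15 L
molecules = conc * NA * volume_L
print(molecules)     # about 0.6 -- roughly one molecule per cell

# Mean spacing at 1 M: number density n, spacing ~ n^(-1/3)
n = 1.0 * NA / 1e-3  # molecules per m^3 at 1 M
spacing_nm = n ** (-1 / 3) * 1e9
print(spacing_nm)    # about 1.2 nm
```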
The book consists of a series of vignettes, each phrased as a question. Here is an excerpt from one.
Which is bigger, mRNA or the protein it codes for?

The role of messenger RNA molecules (mRNAs), as epitomized in the central dogma, is one of fleeting messages for the creation of the main movers and shakers of the cell—namely, the proteins that drive cellular life. Words like these can conjure a mental picture in which an mRNA is thought of as a small blueprint for the creation of a much larger protein machine. In reality, the scales are exactly the opposite of what most people would guess. Nucleotides, the monomers making up an RNA molecule, have a mass of about 330 Da. This is about three times heavier than the average amino acid mass, which weighs in at ~110 Da. Moreover, since it takes three nucleotides to code for a single amino acid, this implies an extra factor of three in favor of mRNA such that the mRNA coding a given protein will be almost an order of magnitude heavier.
It’s obvious once someone explains it to you. Here is another that I liked.
What is the pH of a cell?

…Even though hydrogen ions appear to be ubiquitous in the exercise sections of textbooks, their actual abundance inside cells is extremely small. To see this, consider how many ions are in a bacterium or mitochondrion of volume 1 μm³ at pH 7. Using the rule of thumb that 1 nM corresponds to ~1 molecule per bacterial cell volume, and recognizing that pH 7 corresponds to a concentration of 10⁻⁷ M (or 100 nM), this means that there are about 100 hydrogen ions per bacterial cell…This should be contrasted with the fact that there are in excess of a million proteins in that same cellular volume.
This one surprised me.
What are the concentrations of free metabolites in cells?

…The molecular census of metabolites in E. coli reveals some overwhelmingly dominant molecular players. The amino acid glutamate wins out…at about 100 mM, which is higher than all other amino acids combined…Glutamate is negatively charged, as are most of the other abundant metabolites in the cell. This stockpile of negative charges is balanced mostly by a corresponding positively charged stockpile of free potassium ions, which have a typical concentration of roughly 200 mM.
Somehow, I never realized how much glutamate is in cells. I also learned all sorts of interesting facts. For instance, a 5% by weight mixture of alcohol in water (roughly equivalent to beer) corresponds to a 1 M concentration. I guess the reason this does not wreak havoc on your osmotic balance is that alcohol easily crosses the cell membrane. Apparently yeast use the alcohol they produce to inhibit the growth of bacteria. This must be why John Snow found that during the 1854 London cholera epidemic, the guys working (and, apparently, drinking) in the brewery were immune.

I’ll give you one more example. Milo and Phillips analyze how long it will take a substrate to collide with a protein.
…Say we drop a test substrate molecule into a cytoplasm with a volume equal to that of a bacterial cell. If everything is well mixed and there is no binding, how long will it take for the substrate molecule to collide with one specific protein in the cell? The rate of enzyme substrate collisions is dictated by the diffusion limit, which as shown above, is equal to ~10⁹ M⁻¹s⁻¹ times the concentration. We make use of one of our tricks of the trade, which states that in E. coli, a single molecule (say, our substrate) has an effective concentration of about 1 nM (that is, 10⁻⁹ M). The rate of collisions is thus 10⁹ M⁻¹s⁻¹ × 10⁻⁹ M. That is, they will meet within a second on average. This allows us to estimate that every substrate molecule collides with each and every protein in the cell on average about once per second.
Each and every one, once per second! The beauty of this book, and the value of making these order-of-magnitude estimates, is to provide such insight. I cannot think of any book that has provided me with more insight than Cell Biology by the Numbers.
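The estimate in the excerpt is a single multiplication; spelled out (my own restatement of their numbers):

```python
# Diffusion-limited collision estimate: rate constant times concentration.
k_on = 1e9       # diffusion limit, per molar per second
c_single = 1e-9  # one molecule per E. coli volume is ~1 nM
rate = k_on * c_single   # collisions per second
print(rate)              # 1 collision per second
print(1 / rate)          # average waiting time: about 1 s
```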

Readers of IPMB will enjoy CBbtN. It is well written and the illustrations by Nigel Orme are lovely. It may have more cell biology than readers of IPMB are used to (Russ Hobbie and I are macroscopic guys), but that is fine. For those who prefer video over text, listen to Rob Phillips and Ron Milo give their views of life in the videos below.

I will give Milo and Phillips the last word, which could also sum up our goals for IPMB.
We leave our readers with the hope that they will find these and other questions inspiring and will set off on their own path to biological numeracy. 

Friday, July 1, 2016

The Wien Exponential Law

In Section 14.8 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss blackbody radiation. Our analysis is similar to that in many modern physics textbooks. We introduce Planck’s law for Wλ(λ,T) dλ, the spectrum of power per unit area emitted by a completely black surface at temperature T and wavelength λ,

Wλ(λ,T) = (2πhc²/λ⁵) / (e^(hc/λkBT) − 1) ,

where c is the speed of light, h is Planck’s constant, and kB is Boltzmann’s constant. We then 1) express this function in terms of frequency ν instead of wavelength λ, 2) integrate over all wavelengths to derive the Stefan-Boltzmann law, and 3) show that the wavelength of peak emission decreases as the temperature increases, a result often known as the Wien displacement law.

Russ and I like to provide homework problems that reinforce the concepts in the text. Ideally, the problem requires the reader to repeat many of the same steps carried out in the book, but for a slightly different case or in a somewhat different context. Below I present such a homework problem for blackbody radiation. It is based on an approximation to Planck’s law at short wavelengths derived by Wilhelm Wien.
Problem 25 ½. Consider the limit of Planck’s law, Eq. 14.33, when hc/λ is much greater than kBT, an approximation known as the Wien exponential law.
(a) Derive the mathematical form of Wλ(λ,T) in this limit.
(b) Convert Wien’s law from a function of wavelength to a function of frequency, and determine the mathematical form of Wν(ν,T).
(c) Integrate Wν(ν,T) over all frequencies to obtain the total power emitted per unit area. Compare this result to the Stefan-Boltzmann law (Eq. 14.34). Derive an expression for the Stefan-Boltzmann constant in terms of other fundamental constants.
(d) Determine the frequency νmax corresponding to the peak in Wν(ν,T). Compare νmax/T to the value obtained from Planck’s law.
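Readers who want a numerical feel for part (a) can note that the Wien form differs from the full Planck expression by a simple factor. A small script of my own (it checks the limit without giving away the rest of the problem):

```python
import math

# The Wien approximation drops the -1 in Planck's denominator, so
# W_wien / W_planck = (e^x - 1)/e^x = 1 - e^(-x),
# where x = h*c/(lambda*kB*T). At short wavelengths (large x) the
# ratio approaches 1 and the two laws agree.
def ratio_wien_to_planck(x):
    return 1.0 - math.exp(-x)

for x in (1, 5, 10):
    print(x, ratio_wien_to_planck(x))
# At x = 10 the Wien law is within about 0.005% of Planck's law;
# at x = 1 it is off by about 37%.
```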
The Wien exponential law predated Planck’s law by several years. In his landmark biography ‘Subtle is the Lord…’: The Science and the Life of Albert Einstein, Abraham Pais discusses 19th century attempts to describe blackbody radiation theoretically.
“Meanwhile, proposals for the correct form of [Wλ(λ,T)] had begun to appear as early as the 1860s. All these guesses may be forgotten except one, Wien’s exponential law, proposed in 1896…

Experimental techniques had sufficiently advanced by then to put this formula to the test. This was done by Friedrich Paschen from Hannover, whose measurements (very good ones) were made in the near infrared, λ = 1–8 μm (and T = 400–1600 K). He published his data in January 1897. His conclusion: ‘It would seem very difficult to find another function…that represents the data with as few constants.’ For a brief period, it appeared that Wien’s law was the final answer. But then, in the year 1900, this conclusion turned out to be premature…”
And the rest, as they say, is history.