Friday, June 6, 2014

Plant Physics

Perhaps the 4th edition of Intermediate Physics for Medicine and Biology should have a different title. It really should be Intermediate Physics for Medicine and Zoology. Russ Hobbie and I talk a lot about the physics of animals, but not much about plants. There is little botany in our book. This is not completely true. Homework Problem 34 in Chapter 1 (Mechanics) analyzes the ascent of sap in trees, and we briefly mention photosynthesis in Chapter 3 (Systems of Many Particles). I suppose our discussion of Robert Brown’s observation of the random motion of pollen particles counts as botany, but just barely. Chapter 8 (Biomagnetism) is surprisingly rich in plant examples, with both magnetotactic and biomagnetic signals from algae. But on the whole, our book talks about the physics of animals, and especially humans. I mean, really, who cares about plants?

Plant Physics, by Karl Niklas and Hanns-Christof Spatz.
Guess what? Some people care very much about plants! Karl Niklas and Hanns-Christof Spatz have written a book titled Plant Physics. What is it about? In many ways, it is IPMB redone with only plant examples. Their preface states
This book has two interweaving themes—one that emphasizes plant biology and another that emphasizes physics. For this reason, we have called it Plant Physics. The basic thesis of our book is simple: plants cannot be fully understood without examining how physical forces and processes influence their growth, development, reproduction, and evolution….This book explores…many…insights that emerge when plants are studied with the aid of physics, mathematics, engineering, and chemistry. Much of this exploration dwells on the discipline known as solid mechanics because this has been the focus of much botanical research. However, Plant Physics is not a book about plant solid mechanics. It treats a wider range of phenomena that traditionally fall under the purview of physics, including fluid mechanics, electrophysiology, and optics. It also outlines the physics of physiological processes such as photosynthesis, phloem loading, and stomatal opening and closing.
The chapter titles in Plant Physics overlap with topics in IPMB, such as Chapter 4 (The Mechanical Behavior of Materials), Chapter 6 (Fluid Mechanics), and Chapter 7 (Plant Electrophysiology). I found the mathematical level of the book to be somewhat lower than IPMB, and probably closer to Denny’s Air and Water. (Interestingly, they do not cite Air and Water in their Section 2.3, Living in Water Versus Air, but they do cite another of Denny’s books, Biology and the Mechanics of the Wave-Swept Environment.) The differences between air and water play a key role in plant life: “It is very possible that the colonization of land by plant life was propelled by the benefits of exchanging a blue and often turbid liquid for an essentially transparent mixture of gasses.” The book discusses diffusion, the Reynolds number, chemical potential, Poiseuille flow, and light absorption. Chapter 3 is devoted to Plant Water Relations, and contains an example that serves as a model for how physics can play a role in biology. The opening and closing of stomata (controlled by “guard cells”) in leaves involves diffusion, osmotic pressure, feedback, mechanics, and optics. Fluid flow through both the xylem (transporting water from the roots to the leaves) and the phloem (transporting photosynthetically produced molecules from the leaves to the rest of the plant) is discussed. Biomechanics plays a larger role in Plant Physics than in IPMB, and at the start of Chapter 4 the authors explain why.
The major premise of this book is that organisms cannot violate the fundamental laws of physics. A corollary to this premise is that organisms have evolved and adapted to mechanical forces in a manner consistent with the limits set by the mechanical properties of the materials out of which they are constructed…We see no better expression of these assertions than when we examine how the physical properties of different plant materials influence the mechanical behavior of plants.
Russ and I discuss Poisson’s ratio in a homework problem in Chapter 1. Niklas and Spatz give a nice example of how a large Poisson’s ratio can arise when a cylindrical cell has inextensible fibers in its cell wall that follow a spiral pattern. 
Values [of the Poisson’s ratio] can be very different [from isotropic materials] for composite biological materials such as most tissues, for which Poisson’s ratios greater than 1.0 can be found. A calculation presented in box 4.2 shows that in a sclerenchyma cell, in which practically inextensible cellulose microfibers provide the strengthening material in the cell wall, the Poisson’s ratio strongly depends on the microfibrillar angle; that is, the angle between fibers and the longitudinal axis of the cell.
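The simplest version of that calculation is short enough to sketch in a few lines of Python. The only physics assumed below is that the strain along an inextensible fiber wound at angle θ to the cell’s long axis must vanish; box 4.2 of Plant Physics includes more than this, so treat the numbers as illustrative only.

```python
import math

# Idealized sclerenchyma cell: inextensible cellulose microfibrils wound
# helically at angle theta to the cell's long axis.  If the strain along
# the fiber direction must vanish,
#     eps_fiber = eps_long * cos(theta)**2 + eps_circ * sin(theta)**2 = 0,
# then the effective Poisson's ratio is
#     nu = -eps_circ / eps_long = 1 / tan(theta)**2.
# This is only the simplest constraint (the full calculation in box 4.2 of
# Plant Physics contains more physics), so the numbers are illustrative.

for theta_deg in (20, 30, 45, 60, 80):
    nu = 1.0 / math.tan(math.radians(theta_deg))**2
    print(f"microfibrillar angle {theta_deg:2d} deg -> Poisson's ratio {nu:5.2f}")

# Microfibrillar angles smaller than 45 degrees give Poisson's ratios
# greater than 1, far above the isotropic limit of 0.5.
```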
Given my interest in bioelectric phenomena, I was especially curious about the chapter on Plant Electrophysiology (Chapter 7). The authors derive the Nernst-Planck equation, and the Goldman equation for the transmembrane potential. Interestingly, plants contain potassium and calcium ion channels, but no sodium channels. Many plants have cells that fire action potentials, but the role of the sodium channel for excitation is replaced by a calcium-dependent chloride channel. These are slowly propagating waves; Niklas and Spatz report conduction velocities of less than 0.1 m/s, compared to propagation in a large myelinated human axon, which can reach up to 100 m/s. Patch clamp recordings are more difficult in plant than in animal cells (plants have a cell wall in addition to a cell membrane). Particularly interesting to me were the gravisensitive currents in Lepidium sativum roots. The distribution of current is determined by the orientation of the root in a gravitational field.
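To give a feel for the kind of calculation that chapter’s Goldman equation supports, here is a short Python sketch of the Goldman-Hodgkin-Katz potential written for potassium and chloride only (no sodium term, in keeping with the plant channels described above). All of the concentrations and the permeability ratio are values I assumed for illustration; they are not data from the book.

```python
import math

# Goldman-Hodgkin-Katz voltage equation for just K+ and Cl-,
# since the text notes that plant cells lack sodium channels.
# Concentrations and permeability ratio are assumed, illustrative values.

R = 8.314      # gas constant, J/(mol K)
T = 293.0      # temperature, K
F = 96485.0    # Faraday constant, C/mol

P_K, P_Cl = 1.0, 0.1        # relative permeabilities (assumed)
K_out, K_in = 1.0, 100.0    # potassium concentrations, mM (assumed)
Cl_out, Cl_in = 1.0, 10.0   # chloride concentrations, mM (assumed)

# For the anion Cl-, the inside and outside concentrations swap places.
V = (R * T / F) * math.log((P_K * K_out + P_Cl * Cl_in) /
                           (P_K * K_in + P_Cl * Cl_out))

print(f"GHK membrane potential ~ {1000 * V:.0f} mV (inside negative)")
```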

Botanists need physics just as much as zoologists do. Plants are just one more path leading from physics to biology.

For those wanting to learn more, my colleague at Oakland University, Steffan Puwal, plans to offer a course in Plant Physics in the winter 2015 semester.

Friday, May 30, 2014

Pierre Auger and Lise Meitner

Last week in this blog, I discussed Auger electrons and their role in determining the radiation dose to biological tissue. This week, I would like to examine a bit of history behind the discovery of Auger electrons.

Auger electrons are named for Pierre Auger (1899–1993), a French physicist. Lars Persson discusses Auger’s life and work in a short biographical article (Acta Oncologica, Volume 35, Pages 785–787, 1996)
From the onset of his scientific work in 1922 Pierre Auger took an interest in the cloud chamber method discovered by Wilson and applied it to studying the photoelectric effect produced by x-rays on gas atoms. The Wilson method provided him with the most direct means of obtaining detailed information on the photoelectrons produced, since their trajectories could be followed when leaving the atom that had absorbed the quantum of radiation. He filled the chamber with hydrogen, which has a very low x-ray absorption coefficient, and a small proportion of highly absorbent and chemically neutral heavy gases, such as krypton and xenon. Auger observed some reabsorption in the gas, but most often found that the expected electron trajectory started from the positive ion itself. Numerous experiments enabled Auger to show that the phenomenon is frequent and amounts to non-radiative transitions among the electrons of atoms ionized in depth. This phenomenon was named the Auger effect, and the corresponding electrons Auger electrons. His discovery was published in the French scientific journal Comptes Rendus as a note titled “On secondary beta-rays produced in a gas by x-rays” (1925; 180: 65–8). He was awarded several scientific prizes and was also a nominee for the Nobel Prize in physics which however, he never received. He was a member of the French Academy of Science. Pierre Auger was certainly one of the great men who created the 20th century in science.
Lise Meitner: A Life in Physics, by Ruth Lewin Sime.
What is most interesting to me about the discovery of Auger electrons is that Auger may have been scooped by one of my favorite physicists, Lise Meitner (1878–1968). I didn’t think I would have the opportunity to discuss Meitner in a blog about physics in medicine and biology, and her name never appears in the 4th edition of Intermediate Physics for Medicine and Biology. But the discovery of Auger electrons gives me an excuse to tell you about her. In the book Lise Meitner: A Life in Physics, Ruth Lewin Sime writes about Meitner’s research on UX1 (now known to be the isotope thorium-234)
According to Meitner, the primary process was simply the emission of a decay electron from the nucleus. In UX1 she believed there was no nuclear gamma radiation at all. Instead the decay electron directly ejected a K shell electron, an L electron dropped into the vacancy, and the resultant Kα radiation was mostly reabsorbed to eject L, M, or N electrons from their orbits, all in the same atom. The possibility of multiple transitions without the emission of radiation had been discussed theoretically; Meitner was the first to observe and describe such radiationless transitions. Two years later, Pierre Auger detected the short heavy tracks of the ejected secondary electrons in a cloud chamber, and the effect was named for him. It has been suggested that the “Auger effect” might well have been the “Meitner effect” or at least the “Meitner-Auger effect” had she described it with greater fanfare, but in 1923 it was only part of a thirteen-page article whose main thrust was the beta spectrum of UX1 and the mechanism of its decay.
On the other hand, for an argument in support of Auger’s priority, see Duparc, O. H. (2009) “Pierre Auger – Lise Meitner: Comparative Contributions to the Auger Effect,” International Journal of Materials Research Volume 100, Pages 1162–1166.

The Making of the Atomic Bomb, by Richard Rhodes.
Meitner is best known for her work on nuclear fission, described so eloquently by Richard Rhodes in his masterpiece The Making of the Atomic Bomb. Meitner was an Austrian physicist of Jewish descent working in Germany with Otto Hahn. After the Anschluss, Hitler planned to expel Jewish scientists from their academic positions, but also forbade their emigration. With the help of her Dutch colleague Dirk Coster (who is mentioned in IPMB because of Coster-Kronig transitions), she slipped out of Berlin in July 1938. Rhodes writes
Meitner left with Coster by train on Saturday morning. Nine years later she remembered the grim passage as if she had traveled alone: “I took a train for Holland on the pretext that I wanted to spend a week’s vacation. At the Dutch border, I got the scare of my life when a Nazi military patrol of five men going through the coaches picked up my Austrian passport, which had expired long ago. I got so frightened, my heart almost stopped beating. I knew that the Nazis had just declared open season on Jews, that the hunt was on. For ten minutes I sat there and waited, ten minutes that seemed like so many hours. Then one of the Nazi officials returned and handed me back the passport without a word. Two minutes later I descended on Dutch territory, where I was met by some of my Holland colleagues.”
Even better reading is Rhodes’s description of Meitner’s fateful December 1938 walk in the woods with her nephew Otto Frisch, during which they sat down on a log, worked out the mechanism of nuclear fission, and correctly interpreted Hahn’s experimental data. Go buy his book and enjoy the story. Also, you can listen to Ruth Lewin Sime talk about Meitner’s life and work here.

Listen to Ruth Lewin Sime talk about Lise Meitner’s life.

Friday, May 23, 2014

The Amazing World of Auger Electrons

When analyzing how ionizing radiation interacts with biological tissue, one important issue is the role of Auger electrons. In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I introduce Auger electrons in Chapter 15 (Interaction of Photons and Charged Particles with Matter). An X-ray or charged particle ionizes an atom, leaving a hole in the electron shell.
The hole in the shell can be filled by two competing processes: a radiative transition, in which a photon is emitted as an electron falls into the hole from a higher level, or a nonradiative or radiationless transition, such as the emission of an Auger electron from a higher level as a second electron falls from a higher level to fill the hole.
We consider Auger electrons again in Chapter 17 (Nuclear Physics and Nuclear Medicine). In some cases, a cascade of relatively low-energy electrons is produced by one ionizing event.
The Auger cascade means that several of these electrons are emitted per transition. If a radionuclide is in a compound that is bound to DNA, the effect of several electrons released in the same place is to cause as much damage per unit dose as high-LET [linear energy transfer] radiation….Many electrons (up to 25) can be emitted for one nuclear transformation, depending on the decay scheme [Howell (1992)]. The electron energies vary from a few eV to a few tens of keV. Corresponding electron ranges are from less than 1 nm to 15 μm. The diameter of the DNA double helix is about 2 nm…When it [the radionuclide emitting Auger electrons] is bound to the DNA, survival curves are much steeper, as with the α particles in Fig. 15.32 (RBE [relative biological effectiveness] ≈ 8)
“The Amazing World of Auger Electrons,” by Amin Kassis.
In IPMB, Russ and I cite a paper by Amin Kassis with the wonderful title “The Amazing World of Auger Electrons” (International Journal of Radiation Biology, Volume 80, Pages 789–803). Kassis begins
In 1925, a 26-year-old French physicist named Pierre Victor Auger published a paper describing a new phenomenon that later became known as the Auger effect (Auger 1925). He reported that the irradiation of a cloud chamber with low-energy, X-ray photons results in the production of multiple electron tracks and concluded that this event is a consequence of the ejection of inner-shell electrons from the irradiated atoms, the creation of primary electron vacancies within these atoms, a complex series of vacancy cascades composed of both radiative and nonradiative transitions, and the ejection of very low-energy electrons from these atoms. In later studies, it was recognized that such low-energy electrons are also ejected by many radionuclides that decay by electron capture (EC) and/or internal conversion (IC). Both of these processes introduce primary vacancies in the inner electronic shells of the daughter atoms which are rapidly filled up by a cascade of electron transitions that move the vacancy towards the outermost shell. Each inner-shell electron transition results in the emission of either a characteristic atomic X-ray photon or low-energy and short-range monoenergetic electrons (collectively known as Auger electrons, in honor of their discoverer).
Typically an atom undergoing EC and/or IC emits several electrons with energies ranging from a few eV to approximately 100 keV. Consequently, the range of Auger electrons in water is from a fraction of a nanometer to several hundreds of micrometers (table 1). The ejection of these electrons leaves the decaying atoms transiently with a high positive charge and leads to the deposition of highly localized energy around the decay site. The dissipation of the potential energy associated with the high positive charge and its neutralization may, in principle, also act concomitantly and be responsible for any observed biological effects. Finally, it is important to note that unlike energetic electrons, whose linear energy transfer (LET) is low (~0.2 keV/μm) along most of their rather long linear path (up to one cm in tissue), i.e. ionizations occur sparingly, the LET of Auger electrons rises dramatically to ~26 keV/μm (figure 1) especially at very low energies (35–550 eV) (Cole 1969) with the ionizations clustered within several cubic nanometers around the point of decay. From a radiobiological perspective, it is important to recall that the biological functions of mammalian cells depend on both the genomic sequences of double-stranded DNA and the proteins that form the nucleoprotein complex, i.e. chromatin, and to note that the organization of this polymer involves many structural level compactions (nucleosome, 30-nm chromatin fiber, chromonema fiber, etc.) [see Fig. 16.33 in IPMB] whose dimensions are all within the range of these high-LET (8–26 keV/μm), low-energy (less than 1.6 keV), short-range (less than 130 nm) electrons.
An example of an isotope that emits a cascade of Auger electrons is iodine-125. It has a half-life of 59 days, and decays to an excited state of tellurium-125. The atom deexcites by various mechanisms, including up to 21 Auger electrons with energies of 50 to 500 eV each. Kassis says
Among all the radionuclides that decay by EC and/or IC, the Auger electron emitter investigated most extensively is iodine-125. Because these processes lead to the emission of electrons with very low energies, early studies examined the radiotoxicity of iodine-125 in mammalian cells when the radioelement was incorporated into nuclear DNA consequent to in vitro incubations of mammalian cells with the thymidine analog 5-[125I]iodo-2’-deoxyuridine (125IdUrd). These studies demonstrated that the decay of DNA-incorporated 125I is highly toxic to mammalian cells.
I find it useful to compare 125I with 131I, another iodine radioisotope used in nuclear medicine. 131I undergoes beta decay, followed by emission of a gamma ray. Both the high energy electron from beta decay (up to 606 keV) and the gamma ray (364 keV) can travel millimeters in tissue, passing through many cells. In contrast, 125I releases its cascade of Auger electrons, resulting in extensive damage over a very small distance.
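Here is a short Python sketch of that bookkeeping, using only the numbers quoted above; it is an order-of-magnitude comparison, not dosimetry.

```python
# Order-of-magnitude comparison of where the decay energy goes.  The
# electron count and energies are the figures quoted in the text; the
# "local" versus "spread out" language is only a rough summary of the
# ranges involved.

# Iodine-125: a cascade of up to ~21 Auger electrons of 50-500 eV each
n_auger = 21
E_low, E_high = 50.0, 500.0                    # eV per Auger electron
print(f"I-125 cascade: roughly {n_auger * E_low / 1000:.1f} to "
      f"{n_auger * E_high / 1000:.1f} keV deposited within tens of "
      "nanometers of the decay site")

# Iodine-131: one beta particle (up to 606 keV) plus a 364 keV gamma ray,
# deposited over millimeters of tissue (many cell diameters)
E_beta_max = 606.0   # keV
E_gamma = 364.0      # keV
print(f"I-131: up to {E_beta_max + E_gamma:.0f} keV per decay, "
      "but spread over millimeters")
```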

Civil War buffs might compare these two isotopes to the artillery ammunition of the 1860s. 131I is like a cannon firing shot (solid cannon balls), whereas 125I is like firing canister. If you are trying to take out an enemy battery 1000 yards away, you need shot. But if you are trying to repulse an enemy infantry charge that is only 10 yards away, you use canister or, better, double canister. 131I is shot, and 125I is double canister.

Friday, May 16, 2014

Paul Callaghan (1947-2012)

Principles of Nuclear Magnetic Resonance Microscopy, by Paul Callaghan.
Russ Hobbie and I are hard at work on the 5th edition of Intermediate Physics for Medicine and Biology, which has me browsing through many books—some new and some old classics—looking for appropriate texts to cite. The one I’m looking at now is Paul Callaghan’s Principles of Nuclear Magnetic Resonance Microscopy (Oxford University Press, 1991). Callaghan was the PhD mentor of my good friend and Oakland University colleague Yang Xia. You probably won’t be surprised to know that, like Callaghan, Xia is an MRI microscopy expert. He uses the technique to study the ultrastructure of cartilage at a resolution of tens of microns. Xia assigns Callaghan’s book when he teaches Oakland’s graduate MRI class.

Callaghan gives a brief history of MRI on the first page of his book.
Until the discovery of X-rays by Roentgen in 1895 our ability to view the spatial organization of matter depended on the use of visible light with our eyes being used as primary detectors. Unaided, the human eye is a remarkable instrument, capable of resolving separations of 0.1 mm on an object placed at the near point of vision and, with binocular vision, obtaining a depth resolution of around 0.3 mm. However, because of the strong absorption and reflection of light by most solid materials, our vision is restricted to inspecting the appearance of surfaces. “X-ray vision” gave us the capacity, for the first time, to see inside intact biological, mineral, and synthetic materials and observe structural features.

The early X-ray photographs gave a planar representation of absorption arising from elements right across the object. In 1972 the first X-ray CT scanner was developed with reconstructive tomography being used to produce a two-dimensional absorption image from a thin axial layer.¹ The mathematical methods used in such image reconstruction were originally employed in radio astronomy by Bracewell² in 1956 and later developed for optical and X-ray applications by Cormack³ in 1963. A key element in the growth of tomographic techniques has been the availability of high speed digital computers. These machines have permitted not only the rapid computation of the image from primary data but have also made possible a wide variety of subsequent display and processing operations. The principles of reconstructive tomography have been applied widely in the use of other radiations. In 1973, Lauterbur⁴ reported the first reconstruction of a proton spin density map using nuclear magnetic resonance (NMR), and in the same year Mansfield and Grannell⁵ independently demonstrated the Fourier relationship between the spin density and the NMR signal acquired in the presence of a magnetic field gradient. Since that time the field has advanced rapidly to the point where magnetic resonance imaging (MRI) is now a routine, if expensive, complement to X-ray tomography in many major hospitals. Like X-ray tomography, conventional MRI has a spatial resolution coarser than that of the unaided human eye with volume elements of order (1 mm)³ or larger. Unlike X-ray CT however, where resolution is limited by the beam collimation, MRI can in principle achieve a resolution considerably finer than 0.1 mm and, where the resolved volume elements are smaller than (0.1 mm)³, this method of imaging may be termed microscopic.

1. Hounsfield, G. N. (1973). British Patent No. 1283915 (1972) and Br. J. Radiol. 46, 1016.

2. Bracewell, R. N. (1956). Aust. J. Phys. 9, 198–217.

3. Cormack, A. M. (1963). J. Appl. Phys. 34, 2722–7.

4. Lauterbur, P. C. (1973). Nature 242, 190.

5. Mansfield, P. and Grannell, P. K. (1973). J. Phys. C 6, L422.
Callaghan was an excellent teacher, and he prepared a series of videos about MRI. You can watch them for free here. They really are “must see” videos for people wanting to understand nuclear magnetic resonance. He was a professor at Massey University in Wellington, New Zealand. In 2011 he was named New Zealander of the Year, and you can hear him talk about scientific innovation in New Zealand here.

Callaghan died about two years ago. You can see his obituary here, here and here. Finally, here you can listen to an audio recording of Yang Xia speaking about his mentor at the Professor Sir Paul Callaghan Symposium in February 2013.

Callaghan’s introductory MRI lecture series (Videos 1–8, 9a, 9b, and 10) is embedded here.

Friday, May 9, 2014

Celebrating the 60th Anniversary of the IEEE TBME

The cover of the journal IEEE Transactions on Biomedical Engineering.
One journal that I have published in several times is the IEEE Transactions on Biomedical Engineering. The May issue of IEEE TBME celebrates the journal’s 60th anniversary. Bin He, editor-in-chief, writes in his introductory editorial
THE IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING (TBME) is celebrating 60 years of publishing biomedical engineering advances. TBME was one of the first journals devoted to biomedical engineering. Thanks to IEEE, all of the TBME papers since January 1964 have been archived and are available to the public. In this special issue, celebrating TBME’s 60th anniversary, we have invited 20 leading groups in biomedical engineering research to contribute review articles. Each article reviews state of the art and trends in an area of biomedical engineering research in which the authors have made important original contributions. Due to limited space, it is not our intention to cover all areas of biomedical engineering research in this special issue, but instead to provide coverage of major subfields within the discipline of biomedical engineering, including biomedical imaging, neuroengineering, cardiovascular engineering, cellular and tissue engineering, biomedical sensors and instrumentation, biomedical signal processing, medical robotics, bioinformatics, and computational biology. These review articles are witness to the development of the field of biomedical engineering, and also reflect the role that TBME has played in advancing the field of biomedical engineering over the past 60 years…
These comprehensive and timely reviews reflect the breadth and depth of biomedical engineering and its impact to engineering, biology, medicine, and the larger society. These reviews aim to serve the readers in gaining insights and an understanding of particular areas in biomedical engineering. Many articles also share perspectives from the authors on future trends in the field. While the intention of this special issue was not to cover all research programs in biomedical engineering, these 20 articles represent a collection of state-of-the-art reviews that highlight exciting and significant research in the field of biomedical engineering and will serve TBME readers and the biomedical engineering community in years to come.
Biomedical Engineering can be thought of as an applied version of medical and biological physics, and many of the topics Russ Hobbie and I discuss in the 4th edition of Intermediate Physics for Medicine and Biology are important to biomedical engineers. We cite nineteen IEEE TBME papers in IPMB:
Tucker, R. D., and O. H. Schmitt (1978) “Tests for Human Perception of 60 Hz Moderate Strength Magnetic Fields,” IEEE Trans. Biomed. Eng. Volume 25, Pages 509–518.

Wiley, J. D., and J. G. Webster (1982) “Analysis and Control of the Current Distribution under Circular Dispersive Electrodes,” IEEE Trans. Biomed. Eng. Volume 29, Pages 381–385. 

Cohen, D., I. Nemoto, L. Kaufman, and S. Arai (1984) “Ferrimagnetic Particles in the Lung Part II: The Relaxation Process,” IEEE Trans. Biomed. Eng. Volume 31, Pages 274–285.

Stark, L. W. (1984) “The Pupil as a Paradigm for Neurological Control Systems,” IEEE Trans. Biomed. Eng. Volume 31, Pages 919–924. 

Barach, J. P., B. J. Roth, and J. P. Wikswo (1985) “Magnetic Measurements of Action Currents in a Single Nerve Axon: A Core Conductor Model,” IEEE Trans. Biomed. Eng. Volume 32, Pages 136–140.

Geddes, L. A., and J. D. Bourland (1985) “The Strength-Duration Curve,” IEEE Trans. Biomed. Eng. Volume 32, Pages 458–459. 

Stanley, P. C., T. C. Pilkington, and M. N. Morrow (1986) “The Effects of Thoracic Inhomogeneities on the Relationship Between Epicardial and Torso Potentials,” IEEE Trans. Biomed. Eng. Volume 33, Pages 273–284.

Gielen, F. L. H., B. J. Roth and J. P. Wikswo, Jr. (1986) “Capabilities of a Toroid-Amplifier System for Magnetic Measurements of Current in Biological Tissue,” IEEE Trans. Biomed. Eng. Volume 33, Pages 910–921.

Pickard, W. F. (1988) “A Model for the Acute Electrosensitivity of Cartilaginous Fishes,” IEEE Trans. Biomed. Eng. Volume 35, Pages 243–249. 

Purcell, C. J., G. Stroink, and B. M. Horacek (1988) “Effect of Torso Boundaries on Electrical Potential and Magnetic Field of a Dipole,” IEEE Trans. Biomed. Eng. Volume 35, Pages 671–678.

Trayanova, N., C. S. Henriquez, and R. Plonsey (1990) “Limitations of Approximate Solutions for Computing Extracellular Potential of Single Fibers and Bundle Equivalents,” IEEE Trans. Biomed. Eng. Volume 37, Pages 22–35.

Voorhees, C. R., W. D. Voorhees III, L. A. Geddes, J. D. Bourland, and M. Hinds (1992) “The Chronaxie for Myocardium and Motor Nerve in the Dog with Surface Chest Electrodes,” IEEE Trans. Biomed. Eng. Volume 39, Pages 624–628.

Tan, G. A., F. Brauer, G. Stroink, and C. J. Purcell (1992) “The Effect of Measurement Conditions on MCG Inverse Solutions,” IEEE Trans. Biomed. Eng. Volume 39, Pages 921–927.

Roth, B. J. and J. P. Wikswo, Jr. (1994) “Electrical Stimulation of Cardiac Tissue: A Bidomain Model with Active Membrane Properties,” IEEE Trans. Biomed. Eng. Volume 41, Pages 232–240.

Tai, C., and D. Jiang (1994) “Selective Stimulation of Smaller Fibers in a Compound Nerve Trunk with Single Cathode by Rectangular Current Pulses,” IEEE Trans. Biomed. Eng. Volume 41, Pages 286–291.

Kane, B. J., C. W. Storment, S. W. Crowder, D. L. Tanelian, and G. T. A. Kovacs (1995) “Force-Sensing Microprobe for Precise Stimulation of Mechanoreceptive Tissues,” IEEE Trans. Biomed. Eng. Volume 42, Pages 745–750.

Esselle, K. P., and M. A. Stuchly (1995) “Cylindrical Tissue Model for Magnetic Field Stimulation of Neurons: Effects of Coil Geometry,” IEEE Trans. Biomed. Eng. Volume 42, Pages 934–941.

Roth, B. J. (1997) “Electrical Conductivity Values Used with the Bidomain Model of Cardiac Tissue,” IEEE Trans. Biomed. Eng. Volume 44, Pages 326–328.

Roth, B. J., and M. C. Woods (1999) “The Magnetic Field Associated with a Plane Wave Front Propagating through Cardiac Tissue,” IEEE Trans. Biomed. Eng. Volume 46, Pages 1288–1292.
One endearing feature of the IEEE TBME is that at the end of an article they publish a picture and short bio of each author. Over the years, my goal has been to publish my entire CV, piece by little piece, in these short bios. Below is the picture and bio from my very first published paper, which appeared in IEEE TBME [Barach, Roth, and Wikswo (1985), cited above].

Short bio of Brad Roth, published in the IEEE Transactions on Biomedical Engineering.

Friday, May 2, 2014

Research and Education at the Crossroads of Biology and Physics

The May issue of the American Journal of Physics (my favorite journal) is a “theme issue” devoted to Research and Education at the Crossroads of Biology and Physics. In their introductory editorial, guest editors Mel Sabella and Matthew Lang outline their goals, which are similar to the objectives Russ Hobbie and I have for the 4th edition of Intermediate Physics for Medicine and Biology.
…there is often a disconnect between biology and physics. This disconnect often manifests itself in high school and college physics instruction as our students rarely come to understand how physics influences biology and how biology influences physics. In recent years, both biologists and physicists have begun to recognize the importance of cultivating stronger connections in these fields, leading to instructional innovations. One call to action comes from the National Research Council’s report, BIO2010, which stresses the importance of quantitative and computational training for future biologists and cites that sufficient expertise in physics is crucial to addressing complex issues in the life sciences. In addition, physicists who are now exploring biological contexts in instruction need the expertise of biologists. It is clear that biologists and physicists both have a great deal to offer each other and need to develop interdisciplinary workspaces…

This theme issue on the intersection of biology and physics includes papers on new advances in the fields of biological physics, new advances in the teaching of biological physics, and new advances in education research that inform and guide instruction. By presenting these strands in parallel, in a single issue, we hope to support the reader in making connections, not only at the intersection of biology and physics but also at the intersection of research, education, and education research. Understanding these connections puts us, as researchers and physics educators, in a better position to understand the central questions we face…

The infusion of Biology into Physics and Physics into Biology provides exciting new avenues of study that can inspire and motivate students, educators, and researchers at all levels. The papers in this issue are, in many ways, a call to biologists and physicists to explore this intersection, learn about the challenges and obstacles, and become excited about new areas of physics and physics education. We invite you to read through these articles, reflect, and discuss this complex intersection, and then continue the conversation at the June 2014 Gordon Research Conference titled, “Physics Research and Education: The Complex Intersection of Biology and Physics.”
And guess who has an article in this special issue? Yup, Russ and I have a paper titled “A Collection of Homework Problems About the Application of Electricity and Magnetism to Medicine and Biology.”
This article contains a collection of homework problems to help students learn how concepts from electricity and magnetism can be applied to topics in medicine and biology. The problems are at a level typical of an undergraduate electricity and magnetism class, covering topics such as nerve electrophysiology, transcranial magnetic stimulation, and magnetic resonance imaging. The goal of these problems is to train biology and medical students to use quantitative methods, and also to introduce physics and engineering students to biological phenomena.
Regular readers of this blog know that a “hobby” of mine (pun intended, Russ) is to write new homework problems to go along with our book. Some of the problems in our American Journal of Physics paper debuted in this blog. I believe that a well-crafted collection of homework problems is essential for learning biological and medical physics (remember, for them to be useful you have to do your homework). I hope you will find the problems we present in our paper to be “well-crafted”. We certainly had fun writing them. My biggest concern with our AJP paper is that the problems may be too difficult for an introductory class. The “I” in IPMB stands for “intermediate”, not “introductory”. However, most of the AJP theme issue is about the introductory physics class. Oh well; one needs to learn biological and medical physics at many levels, and the intermediate level is our specialty. If only our premed students would reach the intermediate level (sigh)….

Russ and I are hard at work on the 5th edition of our book, where many of the problems from our paper, along with additional new ones, will appear (as they say, You Ain’t Seen Nothing Yet!).

Anyone interested in teaching biological and medical physics should have a look at this AJP theme issue. And regarding that Gordon Research Conference that Sabella and Lang mention, I’m registered and have purchased my airline tickets! It should be fun. If you are interested in attending, the registration deadline is May 11 (register here). You better act fast.

Friday, April 25, 2014

Bernard Cohen and the Risk of Low Level Radiation

In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the work of physicist Bernard Cohen (1924-2012). In Chapter 16 (Medical Use of X Rays), we describe the dangers of radon gas, and show a picture from a 1995 study by Cohen examining lung cancer mortality as a function of radon gas level (our Fig. 16.57). Interestingly, the mortality rate goes down as radon exposure increases: exactly the opposite of what you would expect if you believed radiation exposure from radon caused lung cancer. In this blog entry, I consider two questions: who is this Bernard Cohen, and how is his work perceived today?

Cohen was a professor of physics at the University of Pittsburgh. An obituary published by the university states
Bernard Cohen was born (1924) and raised in Pittsburgh, PA and did his undergraduate work at Case (now Case Western Reserve Univ.). After service as an engineering officer with the U.S. Navy in the Pacific and the China coast during World War II, he did graduate work in Physics at Carnegie Tech (now Carnegie-Mellon Univ.), receiving a Ph.D. in 1950 with a thesis on “Experimental Studies of High Energy Nuclear Reactions” – at that time “high energy” was up to 15 MeV. His next eight years were at Oak Ridge National Laboratory, and in 1958 he moved to the University of Pittsburgh where he spent the remainder of his career except for occasional leaves of absence. Until 1974, his research was on nuclear structure and nuclear reactions .… His nuclear physics research was recognized by receipt of the American Physical Society Tom Bonner Prize (1981), and his election as Chairman of the A.P.S. Division of Nuclear Physics (1974-75).

In the early 1970s, he began shifting his research away from basic physics into applied problems. Starting with trace element analysis utilizing nuclear scattering and proton and X-ray induced X-ray emission (PIXE and XRF) to solve various practical problems, and production of fluorine-18 for medical applications, he soon turned his principal attention to societal issues on energy and the environment. For this work he eventually received the Health Physics Society Distinguished Scientific Achievement Award, the American Nuclear Society Walter Zinn Award (contributions to nuclear power), Public Information Award, and Special Award (health impacts of low level radiation), and was elected to membership in National Academy of Engineering; he was also elected Chairman of the Am. Nuclear Society Division of Environmental Sciences (1980-81). His principal work was on hazards from plutonium toxicity, high level waste from nuclear power (his first papers on each of these drew over 1000 requests for reprints), low level radioactive waste, perspective on risks in our society, society’s willingness to spend money to avert various type risks, nuclear and non-nuclear targets for terrorism, health impacts of radon from uranium mining, radiation health impacts from coal burning, impacts of radioactivity dispersed in air (including protection from being indoors), in the ground, in rivers, and in oceans, cancer and genetic risks from low level radiation, discounting in assessment of future risks from buried radioactive waste, physics of the reactor meltdown accident, disposal of radioactivity in oceans, the iodine-129 problem, irradiation of foods, hazards from depleted uranium, assessment of Cold War radiation experiments on humans, etc.

In the mid-1980s, he became deeply involved in radon research, developing improved detection techniques and organizing surveys of radon levels in U.S. homes accompanied by questionnaires from which he determined correlation of radon levels with house characteristics, environmental factors, socioeconomic variables, geography, etc. These programs eventually included measurements in 350,000 U.S. homes. From these data and data collected by EPA and various State agencies, he compiled a data base of average radon levels in homes for 1600 U.S. counties and used it to test the linear-no threshold theory of radiation-induced cancer; he concluded that that theory fails badly, grossly over-estimating the risk from low level radiation. This finding was very controversial, and for 10 years after his retirement in 1994, he did research extending and refining his analysis and responding to criticisms.
Although he died two years ago, his University of Pittsburgh website is still maintained, and there you can find a list of many of his articles. The first in the list is the article from which our Fig. 16.57 comes. I particularly like the 4th item in the list, his catalog of risks we face every day. You can find the key figure here. Anyone interested in risk assessment should have a look.

No doubt Cohen’s work is controversial. In IPMB, we cite a debate between Cohen and Jay Lubin, including articles in the journal Health Physics with the following titles
Cohen, B. L. (1995) “Test of the Linear-No Threshold Theory of Radiation Carcinogenesis for Inhaled Radon Decay Products.”

Lubin, J. H. (1998) “On the Discrepancy Between Epidemiologic Studies in Individuals of Lung Cancer and Residential Radon and Cohen’s Ecologic Regression.”

Cohen, B. L. (1998) “Response to Lubin’s Proposed Explanations of the Discrepancy.”

Lubin, J. H. (1998) “Rejoinder: Cohen’s Response to ‘On the Discrepancy Between Epidemiologic Studies in Individuals of Lung Cancer and Residential Radon and Cohen’s Ecologic Regression.’”

Cohen, B. L. (1999) “Response to Lubin’s Rejoinder.”

Lubin, J. H. (1999) “Response to Cohen’s Comments on the Lubin Rejoinder.”
Who says science is boring!

What is the current opinion of Cohen’s work? As I see it, there are two issues to consider: 1) the validity of the specific radon study performed by Cohen, and 2) the general correctness of the linear-no threshold model for radiation risk. About Cohen’s study, here is what the World Health Organization had to say in a 2001 publication.
This disparity is striking, and it is not surprising that some researchers have accepted these data at face value, taking them either as evidence of a threshold dose for high-LET radiation, below which no effect is produced, or as evidence that exposure of the lung to relatively high levels of natural background radiation reduces the risk for lung cancer due to other causes. To those with experience in interpreting epidemiological observations, however, neither conclusion can be accepted (Doll, 1998). Cohen’s geographical correlation study has intrinsic methodological difficulties (Stidley and Samet, 1993, 1994) which hamper any interpretation as to causality or lack of causality (Cohen, 1998; Lubin, 1998a,b; Smith et al., 1998; BEIR VI). The probable explanation for the correlation is uncontrolled confounding by cigarette smoking and inadequate assessment of the exposure of a mobile population such as that of the USA.
Needless to say, Cohen did not accept these conclusions. Honestly, I have not looked closely enough into the details of this particular study to provide any of my own insights.

On the larger question of the validity of the linear no-threshold model, I am a bit of a skeptic, but I realize the jury is still out. I have discussed the linear no-threshold model before in this blog here, here, here, and here. The bottom line is shown in our Fig. 16.58, which plots relative risk versus radon concentration for low doses of radiation; the error bars are so large that the data could be said to be consistent with almost any model. It is devilishly hard to get data about very low dose radiation effects.
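To see why, here is a toy Python sketch comparing a linear no-threshold dose-response curve with a threshold model; the slope and threshold are assumed values chosen for illustration, not parameters fit to Cohen’s data or to Fig. 16.58.

```python
# Two toy dose-response curves for the relative risk (RR) of lung cancer
# as a function of mean home radon concentration C (Bq/m^3).  The slope
# and threshold below are assumed for illustration only; they are not fit
# to any of the data discussed in this post.

beta = 1.6e-3        # assumed excess relative risk per Bq/m^3 (LNT slope)
C_threshold = 150.0  # assumed threshold concentration, Bq/m^3

def rr_lnt(C):
    """Linear no-threshold model: excess risk proportional to dose."""
    return 1.0 + beta * C

def rr_threshold(C):
    """Threshold model: no excess risk below C_threshold."""
    return 1.0 + beta * max(C - C_threshold, 0.0)

for C in (0, 50, 100, 200, 400):
    print(f"C = {C:3d} Bq/m^3:  LNT RR = {rr_lnt(C):.2f},  "
          f"threshold RR = {rr_threshold(C):.2f}")

# At typical indoor concentrations the two models differ by only ten or
# twenty percent in relative risk, a gap easily hidden by the error bars
# of low-dose epidemiological studies.
```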

Right or wrong, you have to admire Bernard Cohen. He made many contributions throughout his long and successful career, and he defended his opinions about low-level radiation risk with courage and spunk. (And, as the 70th anniversary of D-Day approaches, we should all honor his service in World War II). If you want to learn more about Cohen, see his Health Physics Society obituary here, another obituary here, and an interview about nuclear energy here. For those of you who want to hear it straight from the horse’s mouth, you can watch and listen to Cohen's own words in these videos.


Friday, April 18, 2014

The Periodic Table in IPMB

The periodic table of the elements summarizes so much of science, and chemistry in particular. Of course, the periodic table is crucial in biology and medicine. How many of the over one hundred elements do Russ Hobbie and I mention in the 4th edition of Intermediate Physics for Medicine and Biology? Surveying all the elements is too big of a job for one blog entry, so let me consider just the first twenty elements: hydrogen through calcium. How many of these appear in IPMB?
1. Hydrogen. Hydrogen appears many places in IPMB, including Chapter 14 (Atoms and Light) that describes the hydrogen energy levels and emission spectrum.

2. Helium. Liquid helium is mentioned when describing SQUID magnetometers in Chapter 8 (Biomagnetism), and the alpha particle (a helium nucleus) plays a major role in Chapter 17 (Nuclear Physics and Nuclear Medicine).

3. Lithium. Chapter 7 (The Exterior Potential and the Electrocardiogram) mentions the lithium-iodide battery that powers most pacemakers, and Chapter 16 (Medical Use of X Rays) mentions lithium-drifted germanium x-ray detectors.

4. Beryllium. I can’t find beryllium anywhere in IPMB.

5. Boron. Boron neutron capture therapy is reviewed in Chapter 16 (Medical Use of X Rays).

6. Carbon. A feedback loop relating the carbon dioxide concentration in the alveoli to the breathing rate is analyzed in Chapter 10 (Feedback and Control).

7. Nitrogen. When working problems about the atmosphere, readers are instructed to consider the atmosphere to be pure nitrogen (rather than only 80% nitrogen) in Chapter 3 (Systems of Many Particles).

8. Oxygen. Oxygen is often mentioned when discussing hemoglobin, such as in Chapter 18 (Magnetic Resonance Imaging) when describing functional MRI.

9. Fluorine. The isotope Fluorine-18, a positron emitter, is used in positron emission tomography (Chapter 17, Nuclear Physics and Nuclear Medicine).

10. Neon. Not present.

11. Sodium. Sodium and sodium channels are essential for firing action potentials in nerves (Chapter 6, Impulses in Nerve and Muscle Cells).

12. Magnesium. Russ and I don’t mention magnesium by name. However, Problem 16 in Chapter 9 (Electricity and Magnetism at the Cellular Level) provides a citation for the mechanism of anomalous rectification in a potassium channel. The mechanism is block by magnesium ions.

13. Aluminum. Chapter 16 (Medical Use of X Rays) tells how sheets of aluminum are used to filter x-ray beams, removing the low-energy photons while passing the high-energy ones.

14. Silicon. Silicon X ray detectors are considered in Chapter 16 (Medical Use of X Rays).

15. Phosphorus. The section on Distances and Sizes that starts Chapter 1 (Mechanics) considers the molecule adenosine triphosphate (ATP), which is crucial for metabolism.

16. Sulfur. The isotope technetium-99m is often combined with colloidal sulfur for use in nuclear medicine imaging (Chapter 17, Nuclear Physics and Nuclear Medicine).

17. Chlorine. Ion channels are described in Chapter 9 (Electricity and Magnetism at the Cellular Level), including chloride ion channels.

18. Argon. In Problem 32 of Chapter 16 (Medical Use of X rays), we ask the reader to calculate the stopping power of electrons in argon.

19. Potassium. The selectivity and voltage dependence of ion channels have been studied using the Shaker potassium ion channel (Chapter 9, Electricity and Magnetism at the Cellular Level).

20. Calcium. After discussing diffusion in Chapter 4 (Transport in an Infinite Medium), in Problem 23 we ask the reader to analyze calcium diffusion when a calcium buffer is present.

Friday, April 11, 2014

Bilinear Interpolation

If you know the value of a variable at a regular array of points (x_i, y_j), you can estimate its value at intermediate positions (x, y) using an interpolation function. For bilinear interpolation, the function f(x,y) is

f(x,y) = a + b x + c y + d x y

where a, b, c, and d are constants. You can determine these constants by requiring that f(x,y) is equal to the known data at the points (x_i, y_j), (x_{i+1}, y_j), (x_i, y_{j+1}), and (x_{i+1}, y_{j+1}):

f(x_i, y_j) = a + b x_i + c y_j + d x_i y_j
f(x_{i+1}, y_j) = a + b x_{i+1} + c y_j + d x_{i+1} y_j
f(x_i, y_{j+1}) = a + b x_i + c y_{j+1} + d x_i y_{j+1}
f(x_{i+1}, y_{j+1}) = a + b x_{i+1} + c y_{j+1} + d x_{i+1} y_{j+1} .

Solving these four equations for the four unknowns a, b, c, and d, plugging those values into the equation for f(x,y), and then doing a bit of algebra gives you

f(x,y) = [ f(x_i, y_j) (x_{i+1} − x)(y_{j+1} − y) + f(x_{i+1}, y_j) (x − x_i)(y_{j+1} − y)
                + f(x_i, y_{j+1}) (x_{i+1} − x)(y − y_j) + f(x_{i+1}, y_{j+1}) (x − x_i)(y − y_j) ] / (Δx Δy)

where x_{i+1} = x_i + Δx and y_{j+1} = y_j + Δy. To see why this makes sense, let x = x_i and y = y_j. In that case, the last three terms in this expression go to zero, and the first term reduces to f(x_i, y_j), just as you would want an interpolation function to behave. As you can check for yourself, the same is true at all four data points. If you hold y fixed, the function is linear in x, and if you hold x fixed, it is linear in y. If instead you move along a line y = e x (for some constant e), the function is quadratic in x.

If you want to try it yourself, see http://www.ajdesigner.com/phpinterpolation/bilinear_interpolation_equation.php
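Or you can try it in a few lines of Python; here is a minimal sketch of the formula above. Because the test function at the end is itself of the form a + b x + c y + d x y, the interpolation reproduces it exactly.

```python
import numpy as np

def bilinear(x, y, x_grid, y_grid, f_grid):
    """Bilinearly interpolate at (x, y).  x_grid and y_grid are uniformly
    spaced 1-D arrays, and f_grid[i, j] is the value at (x_grid[i], y_grid[j])."""
    dx = x_grid[1] - x_grid[0]
    dy = y_grid[1] - y_grid[0]

    # indices of the grid square containing (x, y)
    i = min(int((x - x_grid[0]) // dx), len(x_grid) - 2)
    j = min(int((y - y_grid[0]) // dy), len(y_grid) - 2)

    xi, xi1 = x_grid[i], x_grid[i + 1]
    yj, yj1 = y_grid[j], y_grid[j + 1]

    return (f_grid[i, j]         * (xi1 - x) * (yj1 - y)
          + f_grid[i + 1, j]     * (x - xi)  * (yj1 - y)
          + f_grid[i, j + 1]     * (xi1 - x) * (y - yj)
          + f_grid[i + 1, j + 1] * (x - xi)  * (y - yj)) / (dx * dy)

# quick check on a function that bilinear interpolation reproduces exactly
x_grid = np.linspace(0.0, 1.0, 11)
y_grid = np.linspace(0.0, 1.0, 11)
X, Y = np.meshgrid(x_grid, y_grid, indexing="ij")
f_grid = 2.0 + 3.0 * X + 4.0 * Y + 5.0 * X * Y

print(bilinear(0.37, 0.82, x_grid, y_grid, f_grid))
print(2.0 + 3.0 * 0.37 + 4.0 * 0.82 + 5.0 * 0.37 * 0.82)   # same value
```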

In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I introduce bilinear interpolation in Problem 20 of Chapter 12, in the context of computed tomography. In CT, you obtain the Fourier transform of the image at points on a polar coordinate grid (k_i, θ_j). In other words, the points lie on concentric circles in the spatial frequency plane, each of radius k_i. In order to compute a numerical two-dimensional Fourier reconstruction to recover the image, one needs the Fourier transform on a Cartesian grid (k_{x,n}, k_{y,m}). Thus, one needs to interpolate from the data at (k_i, θ_j) to (k_{x,n}, k_{y,m}). In Problem 20, we suggest doing this using bilinear interpolation, and ask the reader to perform a numerical example.
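A bare-bones version of that gridding step might look like the function below, which reuses the bilinear routine from the previous sketch; it ignores the wrap-around in θ and the other subtleties discussed in the references that follow.

```python
import numpy as np

def polar_to_cartesian_sample(kx, ky, k_grid, theta_grid, F_polar):
    """Estimate the Fourier transform at the Cartesian point (kx, ky) from
    samples F_polar[i, j] taken at radius k_grid[i] and angle theta_grid[j],
    using the bilinear() function defined in the previous sketch.  This
    sketch ignores the wrap-around at theta = 2*pi and assumes the point
    lies inside the sampled disk."""
    k = np.hypot(kx, ky)
    theta = np.arctan2(ky, kx) % (2.0 * np.pi)
    return bilinear(k, theta, k_grid, theta_grid, F_polar)
```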

I like bilinear interpolation, because it is simple, intuitive, and often “good enough.” But it is not necessarily the best way to proceed. Tomographic methods arise not only in CT but also in synthetic aperture radar (SAR) (see: Munson, D. C., J. D. O’Brien, and W. K. Jenkins (1983) “A Tomographic Formulation of Spotlight-Mode Synthetic Aperture Radar,” Proceedings of the IEEE, Volume 71, Pages 917–925). In their conference proceeding paper “A Comparison of Algorithms for Polar-to-Cartesian Interpolation in Spotlight Mode SAR” (IEEE International Conference on Acoustics, Speech and Signal Processing '85, Volume 10, Pages 1364–1367, 1985), Munson et al. write
Given the polar Fourier samples, one method of image reconstruction is to interpolate these samples to a cartesian grid, apply a 2-D inverse FFT, and to then display the magnitude of the result. The polar-to-cartesian interpolation operation must be of extremely high quality to prevent aliasing . . . In an actual system implementation the interpolation operation may be much more computationally expensive than the FFT. Thus, a problem of considerable importance is the design of algorithms for polar-to-cartesian interpolation that provide a desirable quality/computational complexity tradeoff.
Along the same lines, O’Sullivan (“A Fast Sinc Function Gridding Algorithm for Fourier Inversion in Computer Tomography,” IEEE Trans. Medical Imaging, Volume 4, Pages 200–207, 1985) writes
Application of Fourier transform reconstruction methods is limited by the perceived difficulty of interpolation from the measured polar or other grid to the Cartesian grid required for efficient computation of the Fourier transform. Various interpolation schemes have been considered, such as nearest-neighbor, bilinear interpolation, and truncated sinc function FIR interpolators [3]-[5]. In all cases there is a tradeoff between the computational effort required for the interpolation and the level of artifacts in the final image produced by faulty interpolation.
There has been considerable study of this problem. For instance, see
Stark et al. (1981) “Direct Fourier reconstruction in computer tomography,” IEEE Trans. Acoustics, Speech, and Signal Processing, Volume 29, Pages 237–245.

Moraski, K. J. and D. C. Munson (1991) “Fast tomographic reconstruction using chirp-z interpolation,” 1991 Conference Record of the Twenty-Fifth Asilomar Conference on Signals, Systems and Computers, Volume 2, Pages 1052–1056.
Going into the details of this topic would take me more deeply into signal processing than I am comfortable with. Hopefully, Problem 20 in IPMB will give you a flavor for what sort of interpolation needs to be done, and the references given in this blog entry can provide an entry to more detailed analysis.

Friday, April 4, 2014

17 Equations that Changed the World

In Pursuit of the Unknown: 17 Equations that Changed the World, by Ian Stewart.
Ian Stewart’s book In Pursuit of the Unknown: 17 Equations that Changed the World “is the story of the ascent of humanity, told through 17 equations.” Of course, my first thought was “I wonder how many of those equations are in the 4th edition of Intermediate Physics for Medicine and Biology?” Let’s see.
1. Pythagorean theorem: a^2 + b^2 = c^2. In Appendix B of IPMB, Russ Hobbie and I discuss vectors, and quote Pythagoras’ theorem when relating a vector’s x and y components to its magnitude.

2. Logarithms: log(xy)=log(x)+log(y). In Appendix C, we present many of the properties of logarithms, including this sum/product rule as Eq. C6. Log-log plots are discussed extensively in Chapter 2 (Exponential Growth and Decay).

3. Definition of the derivative: df/dt = lim_{h→0} [f(t+h) − f(t)]/h. We assume the reader has taken introductory calculus (the preface states “Calculus is used without apology”), so we don’t define the derivative or consider what it means to take a limit. However, in Appendix D we present the Taylor series through its first two terms, which is essentially the same equation as the definition of the derivative, just rearranged.

4. Newton’s law of gravity: F = G m_1 m_2/d^2. Russ and I are ruthless about focusing exclusively on physics that has implications for biology and medicine. Almost all organisms live at the surface of the earth. Therefore, we discuss the acceleration of gravity, g, starting in Chapter 1 (Mechanics), but not Newton’s law of gravity.

5. The square root of minus one: i^2 = −1. Russ and I generally avoid complex numbers, but they are mentioned in Chapter 11 (The Method of Least Squares and Signal Analysis) as an alternative way to formulate the Fourier series. We write the equation as i = √−1, which is the same thing as i^2 = −1.

6. Euler’s formula for polyhedra: F − E + V = 2. We never come close to mentioning it.

7. Normal distribution: P(x) = (1/√(2πσ^2)) exp[−(x − μ)^2/2σ^2]. Appendix I is about the Gaussian (or normal) probability distribution, which is introduced in Eq. I.4.

8. Wave equation: ∂^2u/∂t^2 = c^2 ∂^2u/∂x^2. Russ and I introduce the wave equation (Eq. 13.5) in Chapter 13 (Sound and Ultrasound).

9. Fourier transform: f(k) = ∫ f(x) e^{−2πixk} dx. In Chapter 11 (The Method of Least Squares and Signal Analysis) we develop the Fourier transform in detail (Eq. 11.57), and then use it in Chapter 12 (Images) to do tomography.

10. Navier-Stokes equation: ρ (∂v/∂t + v ⋅∇ v) = -∇ p + ∇ ⋅ T + f. Russ and I analyze biological fluid mechanics in Chapter 1 (Mechanics), and write down a simplified version of the Navier-Stokes equation in Problem 28.

11. Maxwell’s equations: ∇ ⋅ E = 0, ∇ × E = −(1/c) ∂H/∂t, ∇ ⋅ H = 0, and ∇ × H = (1/c) ∂E/∂t. Chapter 6 (Impulses in Nerve and Muscle Cells), Chapter 7 (The Exterior Potential and the Electrocardiogram), and Chapter 8 (Biomagnetism) discuss each of Maxwell’s equations. In Problem 22 of Chapter 8, Russ and I ask the reader to collect all these equations together. Yes, I own a tee shirt with Maxwell’s equations on it.

12. Second law of thermodynamics: dS ≥ 0. In Chapter 3 (Systems of Many Particles), Russ and I discuss the second law of thermodynamics. We derive entropy from statistical considerations (I would have chosen S = k_B ln Ω rather than dS ≥ 0 to sum up the second law). We state in words “the total entropy remains the same or increases,” although we don’t actually write dS ≥ 0.

13. Relativity: E = mc^2. We don’t discuss special relativity in much detail, but we do need E = mc^2 occasionally, most notably when discussing pair production in Chapter 15 (Interaction of Photons and Charged Particles with Matter).

14. Schrödinger’s equation: i ħ ∂Ψ/∂t = Ĥ Ψ. Russ and I don’t write down or analyze Schrödinger’s equation, but we do mention it by name, particularly at the start of Chapter 3 (Systems of Many Particles).

15. Information theory: H = - Σ p(x) log p(x). Not mentioned whatsoever.

16. Chaos theory: x_{i+1} = k x_i (1 − x_i). Russ and I analyze chaotic behavior in Chapter 10 (Feedback and Control), including the logistic map x_{i+1} = k x_i (1 − x_i) (Eq. 10.36); a short numerical sketch of this map appears just after this list.

17. Black-Scholes equation: ½ σ^2 S^2 ∂^2V/∂S^2 + r S ∂V/∂S + ∂V/∂t − rV = 0. Never heard of it. Something about economics and the 2008 financial crash. Nothing about it in IPMB.
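Here is the short numerical sketch of the logistic map promised in item 16; the parameter values and starting point are arbitrary, chosen only to show the route from a steady state to chaos.

```python
# A few orbits of the logistic map x_{i+1} = k x_i (1 - x_i) from item 16.
# The k values and starting point are chosen only for illustration.

def logistic_tail(k, x0=0.2, n_transient=200, n_keep=5):
    """Iterate the map, discard the transient, return the last few values."""
    x = x0
    for _ in range(n_transient):
        x = k * x * (1.0 - x)
    tail = []
    for _ in range(n_keep):
        x = k * x * (1.0 - x)
        tail.append(round(x, 4))
    return tail

for k in (2.8, 3.2, 3.5, 3.9):
    print(f"k = {k}: {logistic_tail(k)}")

# k = 2.8 settles to a fixed point, 3.2 to a period-2 cycle,
# 3.5 to a period-4 cycle, and 3.9 never repeats (chaos).
```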
Seventeen is a strange number of equations to select (a medium sized prime number). If I were to round it out to twenty, then I would have three to select on my own. My first thought is Newton’s second law, F=ma, but Stewart mentions that this relationship underlies both the Navier-Stokes equation and the wave equation, so I guess it is already present implicitly. Here are my three:
18. Exponential equation with constant input: dy/dt = a – by. Chapter 2 of IPMB (Exponential Growth and Decay) is dedicated to the exponential function. This equation appears over and over throughout the book. Stewart discusses the exponential function briefly in his chapter on logarithms, but I am inclined to add the differential equation leading to the exponential function to the list. Among its many uses, this function is crucial for understanding the decay of radioactive isotopes in Chapter 17 (Nuclear Physics and Nuclear Medicine). A quick numerical check of its solution appears after this list.

19. Diffusion equation: ∂C/∂t = D ∂^2C/∂x^2. To his credit, Stewart introduces the diffusion equation in his chapter on the Fourier transform, and indeed it was Fourier’s study of the heat equation (the same as the diffusion equation, with T for temperature replacing C for concentration) that motivated the development of the Fourier series. Nevertheless, the diffusion equation is so central to biology, and discussed in such detail in Chapter 4 (Transport in an Infinite Medium) of IPMB, that I had to include it. Some may argue that if we include both the wave equation and the diffusion equation, we also should add Laplace’s equation, but I consider that a special case of Maxwell’s equations, so it is already in the list.

20. Light quanta: E = hν. Although Stewart included Schrödinger’s equation of quantum mechanics, I would include this second equation containing Planck’s constant h. It summarizes the wave-particle duality of light, and is crucially important in Chapters 14 (Atoms and Light), 15 (Interaction of Photons and Charged Particles with Matter), and 16 (Medical Use of X Rays).
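And here is the quick check of item 18 promised above; the parameter values are arbitrary, chosen only for the comparison.

```python
import math

# Item 18: dy/dt = a - b*y has the solution
#     y(t) = a/b + (y0 - a/b) * exp(-b*t),
# which decays exponentially toward the steady state a/b.
# The parameter values below are arbitrary.

a, b, y0 = 5.0, 0.5, 0.0

def y_exact(t):
    return a / b + (y0 - a / b) * math.exp(-b * t)

# crude Euler integration as an independent check
dt, y, t = 0.001, y0, 0.0
while t < 4.0:
    y += dt * (a - b * y)
    t += dt

print(f"numerical y(4) = {y:.4f}, exact y(4) = {y_exact(4.0):.4f}, "
      f"steady state a/b = {a / b}")
```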
Runners-up include the Bloch equations since I need something from Chapter 18 (Magnetic Resonance Imaging), the Boltzmann factor (except that it is a factor, not an equation), Stokes’ law, the ideal gas law and its analog van’t Hoff’s law from Chapter 5 (Transport through Neutral Membranes), the Hodgkin and Huxley equations, the Poisson-Boltzmann equation in Chapter 9 (Electricity and Magnetism at the Cellular Level), the Poisson probability distribution, and Planck’s blackbody radiation law (perhaps in place of E = hν).

Overall, I think studying the 4th edition of Intermediate Physics for Medicine and Biology introduces the reader to most of the critical equations that have indeed changed the world.