Friday, February 28, 2014

The Encyclopedia of Life

Although I am a champion of applying physics to biomedicine, physics has little impact on some parts of biology. For instance, much of zoology and botany consist of the identification and naming of different species: taxonomy. Not too much physics there.

A giant in the field of taxonomy is the Swedish scientist Carl Linnaeus (1707-1778). Linnaeus developed the modern binomial nomenclature to name organisms. Two names are given (often in Latin), genus then species, both italicized with the genus capitalized and the species not. For example, the readers of this blog are Homo sapiens: genus = Homo and species = sapiens. My dog Suki is a member of Canis lupus. Her case is complicated, since the domestic dog is a subspecies of the wolf, Canis lupus familiaris, but because dogs and wolves can interbreed they are considered the same species, and to keep things simple (a physicist’s goal, if not a biologist’s) I will just use Canis lupus. Hodgkin and Huxley performed their experiments on the giant axon from the squid, whose binomial name is Loligo forbesi (as reported in Hodgkin and Huxley, J. Physiol., Volume 104, Pages 176–195, 1945; in their later papers they just mention the genus Loligo, and I am not sure what species they used--they might have used several). My daughter Katherine studied yeast when an undergraduate biology major at Vanderbilt University, and the most common yeast species used by biologists is Saccharomyces cerevisiae. The nematode Caenorhabditis elegans is widely used as a model organism when studying the nervous system. You will often see its name shortened to C. elegans (such abbreviations are common in the Linnaean system). Another popular model system is the egg of the frog species Xenopus laevis. The mouse, Mus musculus, is the most common mammal used in biomedical research. I’m not enough of a biologist to know how viruses, such as the tobacco mosaic virus, fit into the binomial nomenclature.

Out of curiosity, I wondered what binomial names Russ Hobbie and I mentioned in the 4th edition of Intermediate Physics for Medicine and Biology. It is surprisingly difficult to say. I can’t just search my electronic version of the book, because what keyword would I search for? I skimmed through the text and found these four; there may be others. (Brownie points to any reader who can find one I missed and report it in the comments section of this blog.)
If you want to learn more about any of these species, I suggest going to the fabulous website EOL.org. The site states
The Encyclopedia of Life (EOL) began in 2007 with the bold idea to provide “a webpage for every species.” EOL brings together trusted information from resources across the world such as museums, learned societies, expert scientists, and others into one massive database and a single, easy-to-use online portal at EOL.org.

While the idea to create an online species database had existed prior to 2007, Dr. Edward O. Wilson's 2007 TED Prize speech was the catalyst for the EOL you see today. The site went live in February 2008 to international media attention. …

Today, the Encyclopedia of Life is expanding to become a global community of collaborators and contributors serving the general public, enthusiastic amateurs, educators, students and professional scientists from around the world.

Friday, February 21, 2014

Principles of Musical Acoustics

Principles of Musical Acoustics, by William Hartmann.
In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I added a new chapter (Chapter 13) about Sound and Ultrasound. This allows us to discuss acoustics and hearing, an interesting mix of physics and physiology. But one aspect of sound we don’t analyze is music. Yet there is much physics in music. In a previous blog post, I talked about Oliver Sacks’ book Musicophilia, a fascinating story about the neurophysiology of music. Unfortunately, there wasn’t a lot of physics in that work.

Last year, William Hartmann of Michigan State University (where my daughter Kathy is now a graduate student) published a book that provides the missing physics: Principles of Musical Acoustics. The Preface begins
Musical acoustics is a scientific discipline that attempts to put the entire range of human musical activity under the microscope of science. Because science seeks understanding, the goal of musical acoustics is nothing less than to understand how music “works,” physically and psychologically. Accordingly, musical acoustics is multidisciplinary. At a minimum it requires input from physics, physiology, psychology, and several engineering technologies involved in the creation and reproduction of musical sound.
My favorite chapters in Hartmann’s book are Chapter 13 on Pitch, and Chapter 14 on Localization of Sound. Chapter 13 begins
Pitch is the psychological sensation of the highness or the lowness of a tone. Pitch is the basis of melody in music and of emotion in speech. Without pitch, music would consist only of rhythm and loudness. Without pitch, speech would be monotonic—robotic. As human beings, we have astonishingly keen perception of pitch. The principal physical correlate of the psychological sensation of pitch is the physical property of frequency, and our keen perception of pitch allows us to make fine discriminations along a frequency scale. Between 100 and 10,000 Hz we can discriminate more than 2,000 different frequencies!
That is two thousand different pitches within a factor-of-one-hundred frequency range (over six octaves), meaning we can perceive pitches that differ in frequency by about 0.23%. A semitone in music (for example, the difference between a C and a C-sharp) is about 5.9%. That's pretty good: twenty-five distinguishable pitches within one semitone. No wonder we have to hire piano tuners.
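These numbers follow from the quote with a few lines of arithmetic; here is a quick sketch using only the values quoted above (the frequency range, the 2,000-step count, and the equal-tempered semitone):

```python
import math

f_low, f_high = 100.0, 10_000.0   # frequency range quoted by Hartmann (Hz)
n_steps = 2000                     # distinguishable frequencies in that range

# If each just-noticeable step is the same frequency *ratio*, that ratio is
# the 2000th root of the overall factor of 100.
jnd_ratio = (f_high / f_low) ** (1 / n_steps)
jnd_percent = (jnd_ratio - 1) * 100            # about 0.23 %

semitone_ratio = 2 ** (1 / 12)                 # equal-tempered semitone
semitone_percent = (semitone_ratio - 1) * 100  # about 5.9 %

# Number of just-noticeable steps that fit inside one semitone
steps_per_semitone = math.log(semitone_ratio) / math.log(jnd_ratio)

print(f"JND: {jnd_percent:.2f}%, semitone: {semitone_percent:.1f}%, "
      f"steps per semitone: {steps_per_semitone:.0f}")
```

Note that the steps are compared as ratios (logarithmically), not as differences, which is why the semitone-to-JND comparison is a ratio of logarithms.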

Pitch is perceived by “place” (different locations in the cochlea, part of the inner ear, respond to different frequencies) and by “timing” (neurons spike in synchrony with the frequency of the sound). For complex sounds, there is also a “template” theory, in which we learn to associate a collection of frequencies with a particular pitch. The perception of pitch is not a simple process.

There are some interesting differences between pitch perception in hearing and color perception in vision. For instance, on a piano play a middle C (262 Hz) and the next E (330 Hz), a factor of about 1.25 higher in frequency. What you hear is not a pure tone, but a mixture of frequencies—a chord (albeit a simple one). But if you mix red light (450 THz) and green light (563 THz, again a factor of 1.25 higher in frequency), what you see is yellow, indistinguishable by eye from a single frequency of about 520 THz. I find it interesting and odd that the eye and ear differ so much in their ability to perceive mixtures of frequencies. I suspect it has something to do with the eye needing to be able to form an image, so it does not have the luxury of allocating different locations on the retina to different frequencies. On the other hand, the cochlea does not form images, so it can distribute the frequency response over space to improve pitch discrimination. I suppose if we wanted to form detailed acoustic images with our ear, we would have to give up music.
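The parallel between the two frequency pairs is easy to verify; this short sketch uses the approximate values quoted above:

```python
# Compare the C-E musical interval with the red-green light pair.
c4, e4 = 262.0, 330.0          # Hz: piano middle C and the E above it
red, green = 450e12, 563e12    # Hz: approximate red and green light (450 and 563 THz)

print(f"E/C ratio:       {e4 / c4:.3f}")      # a major third, roughly 5:4
print(f"green/red ratio: {green / red:.3f}")  # nearly the same ratio

# Same frequency ratio, yet the ear hears a chord while the eye sees
# a single color (yellow).
```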

Hartmann continues, emphasizing that pitch perception is not just physics.
Attempts to build a purely mechanistic theory for pitch perception, like the place theory or the timing theory, frequently encounter problems that point up the advantages of less mechanistic theories, like the template theory. Often, pitch seems to depend on the listener’s interpretation.
Both Sacks and Hartmann discuss the phenomenon of absolute, or perfect, pitch (AP). Hartmann offers this observation, which I find amazing, suggesting that we should be training our first graders in pitch recognition.
Less than 1% of the population has AP, and it does not seem possible for adults to learn AP. By contrast, most people with musical skills have RP [relative pitch], and RP can be learned at any time in life. AP is qualitatively different from RP. Because AP tends to run in families, especially musical families, it used to be thought that AP is an inherited characteristic. Most of the modern research, however, indicates that AP is an acquired characteristic, but that it can only be acquired during a brief critical interval in one’s life—a phenomenon known as “imprinting.” Ages 5–6 seem to be the most important.
My sister (who has perfect pitch) and I both started piano lessons in early grade school. I guess she took those lessons more seriously than I did.

In Chapter 14 Hartmann addresses another issue: localization of sound. It is complex, and depends on differences in timing and loudness between the two ears.
The ability to localize the source of a sound is important to the survival of human beings and other animals. Although we regard sound localization as a common, natural ability, it is actually rather complicated. It involves a number of different physical, psychological, and physiological processes. The processes are different depending on where the sound happens to be with respect to your head. We begin with sound localization in the horizontal plane.
Interestingly, localization of sound gets more difficult when echoes are present, which has implications for the design of concert halls. He writes
A potential problem occurs when sounds are heard in a room, where the walls and other surfaces in the room lead to reflections. Because each reflection from a surface acts like a new source of sound, the problem of locating a sound in a room has been compared to finding a candle in a dark room where all the walls are entirely covered with mirrors. Sounds come in from all directions and it’s not immediately evident which direction is the direction of the original source.

The way that the human brain copes with the problem of reflections is to perform a localization calculation that gives different weight to localization cues that arrive at different times. Great weight is placed on the information in the onset of the sound. This information arrives directly from the source before the reflections have a chance to get to the listener. The direct sound leads to localization cues such as ILD [interaural level difference], ITD [interaural time difference], and spectral cues that accurately indicate the source position. The brain gives much less weight to the localization cues that arrive later. It has learned that they give unreliable information about the source location. This weighting of localization cues, in favor of the earliest cues, is called the precedence effect.
The enjoyment of music is a truly complicated event, involving much physics and physiology. The Principles of Musical Acoustics is a great place to start learning about it.

Friday, February 14, 2014

Bacterial Decision Making

Medical and biological physics sometimes appear on the cover of Physics Today. For instance, this month (February 2014) the cover shows E. coli. The caption for the cover picture states
Escherichia coli bacteria have served for decades as the “hydrogen atom” of cellular decision making. In that branch of biology, researchers strive to understand the origin of cellular individuality and how a cell decides whether or not to express a particular gene in its DNA. For some of the physics involved, turn to the article by Jané Kondev on page 31.
The article begins with a description of Jacques Monod’s work with the lac operon: a stretch of DNA that regulates the lac genes responsible for lactose digestion. (This story is told in detail in Horace Freeland Judson’s masterpiece The Eighth Day of Creation.) Kondev writes
The key question I’ll address in this article is, What is the molecular basis by which a cell decides to switch a gene on? Although all the cells in figure 1b are genetically identical and experience the same environment, only one appears to be making the protein. As we’ll see, that cellular individuality is a direct consequence of molecular noise that accompanies cellular decision making. The sources of the noise and its biological consequences are currently a hot topic of research. And statistical physics is proving to be an indispensable tool for producing mathematical models capable of explaining data from experiments that look at decisions made by individual cells.
The caption of Fig. 1b reads
In the presence of a lactose surrogate, individual cells can switch from a state in which they are unable to digest lactose to a state in which they are able to consume the secondary sugar. Yellow indicates the amount of a fluorescently labeled protein, lactose permease, which is one of the enzymes needed by the cell to digest lactose.
The article then draws on several physics concepts that Russ Hobbie and I discuss in the 4th edition of Intermediate Physics for Medicine and Biology: the Boltzmann factor, the Gibbs free energy, the Poisson probability distribution, and feedback. The last of these concepts is crucial.
Thanks to that positive feedback, E. coli cells exist in two different steady states—one in which there are many permeases in the cell (the yellow cell in figure 1b), the other in which the number of permeases is low (the dark cells in 1b). Stochastic fluctuations in the expression of the lac genes—fluctuations, for instance, between an on and an off state of the promoter—can flip the switch and turn a lactose noneater to a lactose eater.
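The bistability Kondev describes can be illustrated with a toy rate equation (my own illustrative sketch, not a model from the article): basal production plus a Hill-type positive-feedback term, minus first-order decay, yields two stable steady states for the permease number p. All parameter values here are invented for illustration.

```python
# Toy positive-feedback model for the number of permeases p:
#   dp/dt = a + b * p^n / (K^n + p^n) - g * p
# a: basal production, b: feedback-driven production,
# K: half-saturation, n: Hill coefficient, g: decay rate.
def dpdt(p, a=1.0, b=20.0, K=10.0, n=4, g=1.0):
    return a + b * p**n / (K**n + p**n) - g * p

def steady_state(p, dt=0.01, steps=20_000):
    """Crude forward-Euler integration until the state settles."""
    for _ in range(steps):
        p += dpdt(p) * dt
    return p

low = steady_state(1.0)    # starts low: settles in the "noneater" state
high = steady_state(30.0)  # starts high: settles in the "eater" state
print(f"low state ~ {low:.2f}, high state ~ {high:.2f}")
```

Two different initial conditions relax to two different steady states; in the stochastic picture, fluctuations in gene expression can kick a cell across the unstable threshold between them, flipping the switch.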
The article concludes
Physics-based models are leading to more stringent tests of the molecular mechanisms responsible for gene expression than those provided by the qualitative model presented in biology textbooks. They also pave the way for the design of so-called synthetic genetic circuits, in which the proteins produced by the expression of one gene affect the expression of another. Such circuits hold the promise of bacterial cells capable of producing useful chemicals or combating diseased human cells, including cancerous cells. Whether this foray of physics into biology will lead to fundamentally new biological insights about gene expression remains to be seen.
Kondev’s review offers us one more example of the importance of physics in biology and medicine. And for those of you who think E. coli bacteria are not an appropriate topic for a Valentine’s Day blog post, I say bah humbug.

Friday, February 7, 2014

Distances and Sizes

One of the additions that Russ Hobbie and I made to the 4th edition of Intermediate Physics for Medicine and Biology is an initial section in Chapter 1 about Distances and Sizes.
In biology and medicine, we study objects that span a wide range of sizes: from giant redwood trees to individual molecules. Therefore, we begin with a brief discussion of length scales.
The Machinery of Life, by David Goodsell.
We then present two illustrations. Figure 1.1 shows objects from a few microns to a few hundred microns in size, including a paramecium, an alveolus, a cardiac cell, red blood cells, and E. coli. Figure 1.2 contains objects from a few to a few hundred nanometers, including HIV, hemoglobin, a cell membrane, DNA, and glucose. Many interesting and important biological structures were left out of these figures.

I admit that our figures are not nearly as well drawn as, say, David Goodsell’s artwork in The Machinery of Life. But, I enjoy creating such drawings, even if I am artistically challenged. So, below are two new illustrations, patterned after Figs. 1.1 and 1.2. Think of them as supplementary figures for readers of this blog.


FIGURE 1.1½. Objects ranging in size from 1 mm down to 1 μm. (a) Human hair, (b) human egg, or ovum, (c) sperm, (d) large myelinated nerve axon, (e) skeletal muscle fiber, (f) capillary, (g) yeast, and (h) mitochondria.
FIGURE 1.2½. Objects ranging in size from 1 μm down to 1 nm. (a) Ribosomes, (b) nucleosomes, (c) tobacco mosaic virus, (d) antibodies, and (e) ATP.
Powers of Ten.
When you combine these figures with those in IPMB, you get a nice overview of the important biological objects at these spatial scales. Two things you do not get are a sense of their dynamic behavior (e.g., Brownian motion) at the microscopic scale, and an appreciation for the atomic nature of all objects (you could not detect single atoms in Fig. 1.2½, but they lurk just below the surface; ATP consists of just 47 atoms).
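One way to appreciate the span of these figures is to line the objects up on a logarithmic scale. The sizes below are rough textbook values I supply for illustration, not numbers taken from IPMB:

```python
import math

# Approximate characteristic sizes in meters (rough, illustrative values)
sizes = {
    "giant redwood": 100,
    "human hair (diameter)": 100e-6,
    "red blood cell": 8e-6,
    "E. coli": 2e-6,
    "HIV": 120e-9,
    "hemoglobin": 5e-9,
    "DNA (diameter)": 2e-9,
    "glucose": 1e-9,
}

# List from largest to smallest, with the order of magnitude of each
for name, s in sorted(sizes.items(), key=lambda kv: -kv[1]):
    print(f"{name:25s} {s:9.2e} m  (10^{math.log10(s):+.0f} m)")

span = math.log10(max(sizes.values()) / min(sizes.values()))
print(f"\nspan: about {span:.0f} orders of magnitude")
```

From a redwood down to a glucose molecule is about eleven orders of magnitude, which is why the book needs several figures, each covering a few decades, rather than one.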

If you like this sort of thing, you will love browsing through The Machinery of Life or Powers of Ten.

Friday, January 31, 2014

The Feynman Lectures on Physics: New Millennium Edition

A screenshot of www.feynmanlectures.info.
Several years ago in this blog, I discussed The Feynman Lectures on Physics. Russ Hobbie and I cite The Feynman Lectures in Chapter 11 of the 4th edition of Intermediate Physics for Medicine and Biology. Recently, a New Millennium Edition of the Feynman Lectures was produced, and it is fully online: http://www.feynmanlectures.info. If you are reading this blog, you can read The Feynman Lectures, free and open to all. The preface to the millennium edition states
Nearly fifty years have passed since Richard Feynman taught the introductory physics course at Caltech that gave rise to these three volumes, The Feynman Lectures on Physics. In those fifty years our understanding of the physical world has changed greatly, but The Feynman Lectures on Physics has endured. Feynman's lectures are as powerful today as when first published, thanks to Feynman's unique physics insights and pedagogy. They have been studied worldwide by novices and mature physicists alike; they have been translated into at least a dozen languages with more than 1.5 million copies printed in the English language alone. Perhaps no other set of physics books has had such wide impact, for so long.
This New Millennium Edition ushers in a new era for The Feynman Lectures on Physics (FLP): the twenty-first century era of electronic publishing. FLP has been converted to eFLP, with the text and equations expressed in the LaTeX electronic typesetting language, and all figures redone using modern drawing software.
The consequences for the print version of this edition are not startling; it looks almost the same as the original red books that physics students have known and loved for decades. The main differences are an expanded and improved index, the correction of 885 errata found by readers over the five years since the first printing of the previous edition, and the ease of correcting errata that future readers may find. To this I shall return below.
The eBook Version of this edition, and the Enhanced Electronic Version are electronic innovations. By contrast with most eBook versions of 20th century technical books, whose equations, figures and sometimes even text become pixellated when one tries to enlarge them, the LaTeX manuscript of the New Millennium Edition makes it possible to create eBooks of the highest quality, in which all features on the page (except photographs) can be enlarged without bound and retain their precise shapes and sharpness. And the Enhanced Electronic Version, with its audio and blackboard photos from Feynman's original lectures, and its links to other resources, is an innovation that would have given Feynman great pleasure.
All three volumes of this classic text are online. There is a lot of extra material too, such as errata lists for each edition, exercises with solutions, stories from many physicists about how The Feynman Lectures influenced their careers, original course handouts, and related links. And did I mention it is available free and open to all?

Enjoy!

Friday, January 24, 2014

Drosophila melanogaster

A plush toy of Drosophila melanogaster.
In Chapter 9 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss how the patch clamp technique combined with genetics methods can be used to answer scientific questions. One example we consider is the potassium channel in the fruit fly.
Gene splicing combined with patch-clamp recording provided a wealth of information. Regions of the DNA responsible for synthesizing the membrane channel have been identified. One example that has been extensively studied is a potassium channel from the fruit fly, Drosophila melanogaster. The Shaker fruit fly mutant shakes its legs under anesthesia. It was possible to identify exactly the portion of the fly’s DNA responsible for the mutation. When Shaker DNA was placed in other cells that do not normally have potassium channels, they immediately made functioning channels.
The Eighth Day of Creation: The Makers of the Revolution in Biology, by Horace Freeland Judson.
So what is Drosophila melanogaster, and why is it significant? Horace Freeland Judson describes this famous model system in his masterpiece The Eighth Day of Creation: The Makers of the Revolution in Biology. In his Chapter 4, On T. H. Morgan’s Deviation and the Secret of Life, Judson writes
Thinking of T. H. Morgan, one thinks first, or should, of the common vinegar fly, Drosophila, whose mutants and hybrids and their multitudinous descendants he examined for red eyes and eosin eyes and white eyes, vestigial wings or wild-type, and so on, and which he kept as best he could in hundreds of milk bottles stoppered with cotton wool. With Drosophila, Morgan discovered, for example, the mechanism by which sex is determined, at the instant of the egg’s fertilization, by the pairing of the sex chromosomes, either XX or XY, and the consequent phenomenon of sex-linked inheritance that explains, as we all also know, the appearance of disorders like hemophilia among the male descendants of Queen Victoria. And when Morgan and a student of his, Alfred Henry Sturtevant, perceived that the statistical evidence for linkage of many genes on one chromosome could be extended to map their relative distance one from another along that chromosome, then the hereditary material became palpably a string of beads, a line of points, each controlling a character of the organism.
The Wellsprings of Life, by Isaac Asimov.
In The Wellsprings of Life, Isaac Asimov describes the same experiments.
What was needed [to understand genetics] was a simpler type of organism [compared to humans]; one that was small and with few needs, so that it might easily be kept in quantity; one that bred frequently and copiously; and one that had cells with but a few chromosomes. An organism which met all these needs ideally was first used in 1906 by the American zoologist Thomas Hunt Morgan. This was the common fruit fly, of which the scientific name is the much more formidable Drosophila melanogaster (“the black-bellied moisture-lover”). These are tiny things, only about one twenty-fifth of an inch long, and can be kept in bottles with virtually no trouble. They can breed every two weeks, laying numerous eggs each time. Their cells have only eight chromosomes apiece (with four in the gametes).

More genetic experiments have been conducted with Drosophila in the past half-century [Asimov was writing in 1960] than with any other organism, and Morgan received the Nobel prize in medicine and physiology in 1933 for the work he did with the little insect. Enough work was done with other organisms, from germs to mammals, to show that the results obtained from Drosophila studies are quite general, applying to all species.
If you want to learn more about Drosophila, I suggest the article “Drosophila melanogaster: A Fly Through its History and Current Use” by Stephenson and Metcalfe (Journal of the Royal College of Physicians of Edinburgh, Volume 43, Pages 70–75, 2013). For those who prefer video, here is a great introduction to Drosophila from the Journal of Visualized Experiments. Finally, for our 5-year-old readers (or the young at heart), you can purchase a Drosophila melanogaster plush toy here for just ten dollars.

Friday, January 17, 2014

George Ralph Mines, Biological Physicist

In Chapter 10 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the contribution of George Ralph Mines to cardiac electrophysiology.
The propagation of an action potential is one example of the propagation of a wave in excitable media. We saw in Chap. 7 that waves of depolarization sweep through cardiac tissue. The circulation of a wave of contraction in a ring of cardiac tissue was demonstrated by Mines in 1914. It was first thought that such a wave had to circulate around an anatomic obstacle, but it is now recognized that no obstacle is needed.
This year marks the 100th anniversary of Mines’ landmark work. Regis DeSilva, in an article titled “George Ralph Mines, Ventricular Fibrillation and the Discovery of the Vulnerable Period” (Journal of the American College of Cardiology, Volume 29, Pages 1397–1402, 1997) describes Mines’ work in more detail.
George Ralph Mines … made two major contributions to electrophysiology. His scientific legacy includes elucidating the theoretical basis for the occurrence of reentrant arrhythmias and the discovery of the vulnerable period of the ventricle.
First, DeSilva discusses Mines’ analysis of reentry in cardiac tissue.
Mines applied his concept of reentry to myocardial tissue and suggested that closed circuits may also exist within heart muscle. Under normal conditions, these circuits are uniformly excited, and an excitatory wave dies out. He suggested that the twin conditions of unidirectional block and slow conduction may occur in abnormal myocardial tissue. Thus, tissue in a reentrant circuit may allow a circulating wavefront to be sustained by virtue of conductive tissue being always available for excitation. In this paper, he also published a now classic figure by illustrating the concept of circus movement in such small myocardial circuits, and this diagram is still used unchanged today in teaching this mechanism to students of electrocardiography (14)
Reference 14 (Mines GR, “On Dynamic Equilibrium in the Heart,” Journal of Physiology, Volume 46, Pages 349–383, 1913) is not cited in IPMB.

DeSilva then addresses Mines’ identification of a “vulnerable period” in the heart.
Mines’ second major contribution was also his most important discovery. It was published … in 1914, entitled “On Circulating Excitations in Heart Muscles and Their Possible Relation to Tachycardia and Fibrillation” [Transactions of the Royal Society of Canada, Volume 8, Pages 43–52, 1914] (15).…[Before 1914] the most common method of inducing fibrillation was by the application of repeated electrical shocks to the heart through an induction coil. Mines’ innovation in studying the onset of fibrillation was to modify the method by applying single shocks to the rabbit heart, and by timing them precisely at various periods during the cardiac cycle.… Stimuli were delivered by single taps of a Morse key, and the moment of application of the stimulus was signaled by the use of a sparking coil connected to an insulated pointer that produced dots on the kymographic trace. Correlation of the position of the dots on the mechanical trace with the electrocardiogram provided an indication of its timing in electrical diastole…. By so doing, ‘it was found in a number of experiments that a single tap of the Morse key if properly timed [his italics] would start fibrillation which would persist for a time. . . . The point of interest is that the stimulus employed would never cause fibrillation unless it was set in at a certain critical instant’ (15)…. The importance of this work lies in the fact that Mines identified for the first time a narrow zone fixed within electrical diastole during which the heart was extremely vulnerable to fibrillation. An external stimulus, or a stimulus generated from within the heart, if properly timed to fall within this zone, could trigger a fatal arrhythmia and cause death. This observation has spurred three generations of scientists to study the factors which cause death by disruption of what Mines called 'the dynamic equilibrium of the heart' (14).
Clearly Mines made landmark contributions to our understanding of the heart. But perhaps the most intriguing aspect of Mines’ life was the unusual circumstances of his untimely death. DeSilva writes
On the evening of Saturday November 7, 1914, the night janitor entered Mines’ laboratory and found him lying unconscious with equipment attached, apparently for the recording of respiration (25). He was taken immediately to the Royal Victoria Hospital where he regained consciousness only briefly. Shortly before midnight, he developed seizures and died without regaining consciousness. A complete autopsy was performed, including examination of all the abdominal and thoracic viscera and the brain, but no final diagnosis was rendered (26). The presumption was that death resulted from self-experimentation.
Here is how Art Winfree describes the same event, in his Scientific American article “Sudden Cardiac Death: A Problem in Topology”
Mines had been trying to determine whether relatively small, brief electrical stimuli can cause fibrillation. For this work he had constructed a device to deliver electrical impulses to the heart with a magnitude and timing that could be precisely controlled. The device had been employed in preliminary work with animals. When Mines decided it was time to begin work with human beings, he chose the most readily available experimental subject: himself. At about six o’clock that evening a janitor, thinking it was unusually quiet in the laboratory, entered the room. Mines was lying under the laboratory bench surrounded by twisted electrical equipment. A broken mechanism was attached to his chest over the heart and a piece of apparatus nearby was still recording the faltering heartbeat. He died without recovering consciousness.
Winfree notes in the 2nd edition of his book The Geometry of Biological Time that there is still some controversy about whether Mines’ death truly resulted from self-experimentation. The circumstances of his death are certainly suggestive, even if we lack definitive proof.

I can't help but notice the similarities between George Ralph Mines and Henry Moseley. Both were Englishmen whose last name started with "M". Both were born at about the same time (Mines in 1886, Moseley in 1887). Both made fundamental contributions to science at an early age (Mines to cardiac electrophysiology, and Moseley to our understanding of the atomic number and the periodic table). Both are probably underappreciated in the history of science, and neither won the Nobel Prize. And both died before reaching the age of 30 (Mines in 1914, Moseley in 1915). Mines died in the mysterious accident in his lab described above, and Moseley died in the Battle of Gallipoli during World War I. And, of course, both are mentioned in the 4th edition of Intermediate Physics for Medicine and Biology.

Friday, January 10, 2014

Happy Birthday, Earl Bakken!

Today, Earl Bakken turns 90 years old. Bakken is the founder of the medical device company Medtronic, and he played a key role in the development of the artificial pacemaker. I had the good fortune to meet Bakken in 2009 at a reception in the Bakken Museum as part of the 31st Annual International Conference of the IEEE Engineering in Medicine and Biology Society held in Minneapolis.

Machines in our Hearts, by Kirk Jeffrey, superimposed on Intermediate Physics for Medicine and Biology.
Kirk Jeffrey’s book Machines in our Hearts tells the story of how Bakken, at the request of the renowned heart surgeon C. Walton Lillehei, developed the first battery-powered pacemaker.
Bakken first thought of an “automobile battery with an inverter to convert the six volts to 115 volts to run the AC pacemaker on its wheeled stand. That, however, seemed like an awfully inefficient way to do the job, since we needed only a 10-volt direct-current pulse to stimulate the heart.” Powering the stimulator from a car battery would have eliminated the need for electrical cords and plugs, but would not have done away with the wheeled cart. Bakken then realized that he could simply build a stimulator that used transistors and small batteries. “It was kind of an interesting point in history,” he recalled—“a joining of several technologies.” In constructing the external pulse generator, Bakken borrowed a circuit design for a metronome that he had noticed a few months earlier in an electronics magazine for hobbyists. It included two transistors. Invented a decade earlier, the transistor was just beginning to spread into general use in the mid-1950s. Hardly anyone had explored its applications in medical devices. Bakken used a nine-volt battery, housed the assemblage in an aluminum circuit box, and provided an on-off switch and control knobs for stimulus rate and amplitude.

At the electronics repair shop that he had founded with his brother-in-law in 1949, Bakken had customized many instruments for researchers at the University of Minnesota Medical School and the nearby campus of the College of Agriculture. Investigators often “wanted special attachments or special amplifiers” added to some of the standard recording and measuring equipment. “So we began to manufacture special components to go with the recording equipment. And that led us into just doing specials of many kinds…We developed….animal respirators, semen impedance meters for the farm campus, just a whole spectrum of devices.” Usually the business would sell a few of these items. When Bakken delivered the battery-powered external pulse generator to Walt Lillehei in January 1958, it seemed to the inventor another special order, nothing more. The pulse generator was hardly an aesthetic triumph, but it was small enough to hold in the hand and severed all connection between the patient’s heart and the hospital power system. Bakken’s business had no animal-testing facility, so he assumed that the surgeons would test the device by pacing laboratory dogs. They did “a few dogs,” then Lillehei put the pacemaker into clinical use. When Bakken next visited the university, he was surprised to find that his crude prototype was managing the heartbeat of a child recovering from open-heart surgery.
Russ Hobbie and I discuss the artificial pacemaker in Chapter 7 of the 4th edition of Intermediate Physics for Medicine and Biology:
Cardiac pacemakers are a useful treatment for certain heart diseases [Jeffrey (2001), Moses et al. (2000); Barold (1985)]. The most frequent are an abnormally slow pulse rate (bradycardia) associated with symptoms such as dizziness, fainting (syncope), or heart failure. These may arise from a problem with the SA node (sick sinus syndrome) or with the conduction system (heart block). One of the first uses of pacemakers was to treat complete or “third degree” heart block. The SA node and the atria fire at a normal rate but the wave front cannot pass into the conduction system. The AV node or some other part of the conduction system then begins firing and driving the ventricles at its own, pathologically slower rate. Such behavior is evident in the ECG in Fig. 7.30, in which the timing of the QRS complex from the ventricles is unrelated to the P wave from the atria. A pacemaker stimulating the ventricles can be used to restore a normal ventricular rate.
You can learn more about Bakken’s contributions to the development of the pacemaker here or on video here. Visit his website here or read his autobiography. He now lives in Hawaii, where local magazines have reported about him here and here. Those wanting to join the celebration can attend Earl Bakken’s birthday bash at the Bakken Museum, or celebrate at the North Hawaii Community Hospital.

Happy birthday, Earl Bakken!

Friday, January 3, 2014

Integrals of Sines and Cosines

Last week in this blog, I discussed the Fourier series. This week, I want to highlight some remarkable mathematical formulas that make the Fourier series work: integrals of sines and cosines. Products of sines and cosines, integrated over one period (0 to 2π), obey these relationships:

∫ sin(mx) sin(nx) dx = 0 for m ≠ n (and equals π for m = n ≠ 0)
∫ cos(mx) cos(nx) dx = 0 for m ≠ n (and equals π for m = n ≠ 0)
∫ sin(mx) cos(nx) dx = 0 for all m and n
where n and m are integers. In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I dedicate Appendix E to studying these integrals. They allow some very complicated expressions involving infinite sums to reduce to elegantly simple equations for the Fourier coefficients. Whenever I’m teaching Fourier series, I go through the derivation up to the point where these integrals are needed, and then say “and now the magic happens!”

The collection of sines and cosines (sin mx, cos nx) are an example of an orthogonal set of functions. How do you prove orthogonality? One can derive it using the trigonometric product-to-sum formulas.
sin(mx) sin(nx) = ½[cos((m − n)x) − cos((m + n)x)]
cos(mx) cos(nx) = ½[cos((m − n)x) + cos((m + n)x)]
sin(mx) cos(nx) = ½[sin((m + n)x) + sin((m − n)x)]
I prefer to show that these integrals are zero for some special cases, and then generalize. Russ and I do just that in Figure E2. When we plot (a) sin x sin 2x and (b) sin x cos x over the range 0 to 2π, it becomes clear that these integrals are zero. We write “each integrand has equal positive and negative contributions to the total integral,” which is obvious by merely inspecting Fig. E2. Is this a special case? No. To see a few more examples, I suggest plotting the following functions between 0 and 2π:
Product of trigonometric functions to plot.
In each case, you will see the positive and negative regions cancel pairwise. It really is amazing. But don’t take my word for it, as you’ll miss out on all the fun. Try it.
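If you’d rather check the cancellation numerically than plot it, here is a short Python sketch (my own illustration, not from the book) that integrates a few of these products over one period with the midpoint rule. Each result is zero to within roundoff:

```python
import math

def integrate(f, a, b, n=10_000):
    """Midpoint-rule numerical integration of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

two_pi = 2 * math.pi

# Each product of sines/cosines with unequal frequencies integrates to
# zero over a full period: the positive and negative lobes cancel.
for label, f in [
    ("sin x sin 2x",  lambda x: math.sin(x) * math.sin(2 * x)),
    ("sin x cos x",   lambda x: math.sin(x) * math.cos(x)),
    ("sin 3x sin 5x", lambda x: math.sin(3 * x) * math.sin(5 * x)),
    ("cos 2x cos 7x", lambda x: math.cos(2 * x) * math.cos(7 * x)),
]:
    print(f"integral of {label} over [0, 2pi] = {integrate(f, 0, two_pi):+.2e}")
```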

Nearly as amazing is what happens when you analyze the case m = n by integrating cos(nx) cos(nx) = cos²(nx) or sin(nx) sin(nx) = sin²(nx). Now the integrand is a square, so it must always be positive. These integrals don’t vanish (although the “mixed” integral of cos(nx) sin(nx) does go to zero). How do I remember the value of this integral? Just recall that the average value of either cos²(nx) or sin²(nx) is ½, so integrating over the full period from 0 to 2π gives ½ × 2π = π.
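A quick numerical check of the m = n case (again, just an illustrative sketch in plain Python, not taken from the book):

```python
import math

def integrate(f, a, b, n=10_000):
    """Midpoint-rule numerical integration of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

two_pi = 2 * math.pi

# For m = n the integrand is a square (always non-negative), and the
# integral over [0, 2pi] is the average value (1/2) times 2pi, i.e. pi.
for n_freq in (1, 2, 5):
    cos2 = integrate(lambda x: math.cos(n_freq * x) ** 2, 0, two_pi)
    sin2 = integrate(lambda x: math.sin(n_freq * x) ** 2, 0, two_pi)
    print(f"n = {n_freq}: integral of cos^2 = {cos2:.6f}, "
          f"integral of sin^2 = {sin2:.6f}, pi = {math.pi:.6f}")
```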

When examining non-periodic functions, one integrates over all x, rather than from merely zero to 2π. In this case, Russ and I show in Sec. 11.10 that you get delta function relationships such as
∫−∞^∞ sin(kx) sin(k′x) dx = π δ(k − k′)
I won’t ask you to plot the integrand over x; since x runs from negative infinity to infinity, it might take you a long time.

The integrals of products of sines and cosines are one example of how Russ and I use appendices to examine mathematical results that are essential but might distract the reader from the main topic (in this case, the Fourier series and its application to imaging).

Friday, December 27, 2013

Fourier Series

One of the most important mathematical techniques for a physicist is the Fourier series. I discussed Joseph Fourier, the inventor of this method, previously in this blog. In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the Fourier series in Sections 11.4 and 11.5.

The classic example of a Fourier series is the representation of a periodic square wave: y(t) = 1 for t between 0 and T/2, and y(t) = −1 for t between T/2 and T, where T is the period. The Fourier series represents this function as a sum of sines and cosines, with frequencies of k/T, where k is an integer, k = 0, 1, 2, …. The square wave function y(t) is odd, so the contributions of the cosine functions vanish. The sine functions contribute for half the frequencies, those with odd values of k. The amplitude of each non-zero frequency is 4/(πk) (Eq. 11.34 in IPMB), so the very high frequency terms (large k) don’t contribute much.
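If you want to verify those coefficients yourself, here is a small Python sketch (my own illustration; the names square and fourier_coefficients are mine, not the book’s) that estimates the Fourier coefficients of the square wave numerically. The sine coefficients come out to 4/(πk) for odd k and zero for even k, and the cosine coefficients all vanish:

```python
import math

T = 1.0  # period (an arbitrary choice for this sketch)

def square(t):
    """Square wave: +1 on [0, T/2), -1 on [T/2, T)."""
    return 1.0 if (t % T) < T / 2 else -1.0

def fourier_coefficients(y, k, n=20_000):
    """Midpoint-rule estimates of the cosine (a_k) and sine (b_k) coefficients."""
    h = T / n
    a_k = b_k = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        a_k += y(t) * math.cos(2 * math.pi * k * t / T)
        b_k += y(t) * math.sin(2 * math.pi * k * t / T)
    return (2 / T) * a_k * h, (2 / T) * b_k * h

for k in range(1, 8):
    a_k, b_k = fourier_coefficients(square, k)
    expected = 4 / (math.pi * k) if k % 2 == 1 else 0.0
    print(f"k = {k}: a_k = {a_k:+.5f}, b_k = {b_k:+.5f} "
          f"(expected b_k = {expected:+.5f})")
```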

Being able to calculate the Fourier series is nice, but much more important is being able to visualize it. When I teach my Medical Physics class (PHY 326), based on the last half of IPMB, I stress that students should “think before you calculate.” One ought to be able to predict qualitatively the Fourier coefficients by inspection. Being able to understand a mathematical calculation in pictures and in physical terms is crucially important for a physicist. The Wikipedia article about a square wave has a nice animation of the square wave being built up by adding more and more frequencies to the series. I always insist that students draw figures showing better and better approximations to a function as more terms are added, at least for the first three non-zero Fourier components. You can also find a nice discussion of the square wave at the Wolfram website. However, the best visualization of the Fourier series that I have seen was brought to my attention by one of the PHY 326 students, Melvin Kucway. He found this lovely site, which shows the different Fourier components as little spinning wheels attached to wheels attached to wheels, each with the correct radius and spinning frequency so that their sum traces out the square wave. Watch this animation carefully. Notice how the larger wheels rotate at a lower frequency, while the smaller wheels spin around at higher frequencies. This picture reminds me of the pre-Copernican view of the rotation of planets based on epicycles proposed by Ptolemy.
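If you don’t have the animation handy, you can still watch the partial sums converge numerically. Here is a minimal Python sketch (my own, purely illustrative) that sums the first few non-zero terms of the series and evaluates it at t = T/4, where the exact square wave equals 1:

```python
import math

def square_partial_sum(t, T, n_terms):
    """Partial Fourier sum for the square wave: only odd harmonics
    survive, each with amplitude 4/(pi k)."""
    total = 0.0
    for j in range(n_terms):
        k = 2 * j + 1  # odd harmonics only
        total += (4 / (math.pi * k)) * math.sin(2 * math.pi * k * t / T)
    return total

T = 1.0
t = T / 4  # midpoint of the +1 half of the square wave
for n_terms in (1, 3, 10, 100):
    approx = square_partial_sum(t, T, n_terms)
    print(f"{n_terms:3d} non-zero terms: y({t}) is approximately {approx:.5f}")
```

The partial sums alternate above and below the exact value 1, and the error shrinks like the first omitted term.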

What is unique about the development of Fourier series in IPMB? Our approach, which I rarely, if ever, see elsewhere, is to derive the Fourier coefficients using a least-squares approach. This may not be the simplest or most elegant route to the coefficients, but in my opinion it is the most intuitive. Also, we emphasize the Fourier series written in terms of sines and cosines, rather than complex exponentials. Why? Understanding Fourier series on an intuitive level is hard enough with trigonometric functions; it becomes harder still when you add in complex numbers. I admit, the math appears in a more compact expression using complex exponentials, but for me it is more difficult to visualize.
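To see the least-squares idea in action, here is an illustrative Python sketch (using NumPy, and entirely my own construction rather than the book’s derivation): sample the square wave, build a design matrix whose columns are sines and cosines, and let a least-squares solver pick the coefficients that minimize the squared error. Because the sampled sines and cosines are orthogonal, the fitted coefficients match the analytic Fourier coefficients, with b₁ ≈ 4/π:

```python
import numpy as np

T = 1.0
N = 4096
t = (np.arange(N) + 0.5) * T / N      # sample points across one period
y = np.where(t < T / 2, 1.0, -1.0)    # sampled square wave

K = 5  # number of harmonics to include in the fit
# Design matrix: a constant column, then cos and sin columns per harmonic.
columns = [np.ones(N)]
for k in range(1, K + 1):
    columns.append(np.cos(2 * np.pi * k * t / T))
    columns.append(np.sin(2 * np.pi * k * t / T))
A = np.column_stack(columns)

# Least squares: choose coefficients minimizing the sum of squared residuals.
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

for k in range(1, K + 1):
    a_k, b_k = coeffs[2 * k - 1], coeffs[2 * k]
    print(f"k = {k}: a_k = {a_k:+.5f}, b_k = {b_k:+.5f}")
```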

If you want a nice introduction to Fourier series, click here or here (in the second site, scroll down to the bottom on the left). If you prefer listening to reading, click here for an MIT Open Courseware lecture about the Fourier series. The two subsequent lectures are also useful: see here and here. The last of these lectures examines the square wave specifically.

One of the fascinating things about the Fourier representation of the square wave is the Gibbs phenomenon. But, I have discussed that in the blog before, so I won’t repeat myself.

What is the Fourier series used for? In IPMB, the main application is in medical imaging. In particular, computed tomography (Chapter 12) and magnetic resonance imaging (Chapter 18) are both difficult to understand quantitatively without using Fourier methods.

As a new year’s resolution, I suggest you master the Fourier series, with a focus on understanding it on a graphical and intuitive level. What is my new year’s resolution for 2014? It is for Russ and me to finish and submit the 5th edition of IPMB to our publishers. With luck, you will be able to purchase a copy before the end of 2015.