Friday, September 13, 2013

Plain Words

Plain Words, by Sir Ernest Gowers, superimposed on Intermediate Physics for Medicine and Biology.
Plain Words,
by Sir Ernest Gowers.
When I arrived at graduate school, the main goal given to me by my advisor John Wikswo was to write scientific papers. Of course, I had to write a PhD dissertation, but that was in the distant future. The immediate job was to publish journal articles. John is a good writer, and he insists his students write well. So he recommended that I read the book Plain Words, by Sir Ernest Gowers. (I can’t recall if he made this suggestion before or after reading my first draft of a paper!) I dutifully read the book, which I have come to love. I believe I read the 1973 revision by Bruce Fraser, although I am not sure; I borrowed Wikswo’s copy.

Gowers is an advocate for writing simply and clearly. He states in the introduction
Here we come to the most important part of our subject. Correctness is not enough. The words used may all be words approved by the dictionary and used in their right senses; the grammar may be faultless and the idiom above reproach. Yet what is written may still fail to convey a ready and precise meaning to the reader. That it does so fail is the charge brought against much of what is written nowadays, including much of what is written by officials. In the first chapter I quoted a saying of Matthew Arnold that the secret of style was to have something to say and to say it as clearly as you can. The basic fault of present-day writing is a tendency to say what one has to say in as complicated a way as possible. Instead of being simple, terse and direct, it is stilted, long-winded and circumlocutory; instead of choosing the simple word it prefers the unusual.
I have become a strong advocate for using plain language in scientific writing. Over the last three decades I have reviewed hundreds of papers for scientific journals, and I can attest that many scientists should read Plain Words. I have tried to use plain, clear language in the 4th edition of Intermediate Physics for Medicine and Biology (although Russ Hobbie’s writing in the earlier editions, which I had nothing to do with, was already quite good, so the book didn’t need much editing by me). Below, Gowers describes three rules for writing, which apply as well to scientific writing as to the official government writing that he focused on.
What we are concerned with is not a quest for a literary style as an end in itself, but to study how best to convey our meaning without ambiguity and without giving unnecessary trouble to our readers. This being our aim, the essence of the advice of both these authorities [mentioned earlier] may be expressed in the following three rules, and the rest of what I have to say in the domain of the vocabulary will be little more than an elaboration of them.
- Use no more words than are necessary to express your meaning. For if you use more you are likely to obscure it and to tire your reader. In particular do not use superfluous adjectives and adverbs and do not use roundabout phrases where single words would serve.
- Use familiar words rather than the far-fetched, for the familiar are more likely to be readily understood.
- Use words with a precise meaning rather than those that are vague, for they will obviously serve better to make your meaning clear; and in particular prefer concrete words to abstract, for they are more likely to have a precise meaning.
For me, the chore of writing is made easier because I like to write. Really, why else would I write this blog each week if I didn’t enjoy the craft of writing? (Certainly increased book sales can’t justify the time and effort.) When my children were young, I once became secretary of their elementary school’s Parent-Teacher Association mainly because my primary duty would be writing the minutes of the PTA meetings. If you were to ask my graduate students, I think they would complain that I make too many changes to drafts of their papers, and that we go through too many iterations before submission to a journal. I can usually tell when we are close to a finished paper, because I find myself putting in commas in one draft, and then taking them out in the next. One trick Wikswo taught me is to read the text out loud, listening to the cadence and tone. I find this helpful, and I don’t care what people think when they walk by and hear me reading to myself in my office.

Most Americans have an advantage in the world of science. Modern science is primarily performed and published in the English language, which is our native tongue. I feel sorry for those who must submit articles written in an unfamiliar language—it really is unfair—but that has not stopped me from criticizing their English mercilessly in anonymous reviews. For any young scientist who may be reading this blog (and I do hope there are some of you out there), my advice is: learn to write. As a scientist, you will be judged on your written documents: your papers, your reports, and above all your grant proposals. You simply cannot afford to have these poorly written.

I believe role models are important in writing. One of mine is Isaac Asimov. While I enjoy his fiction, I use his science writing as an example of how to explain difficult concepts clearly. I was very lucky to have encountered his books when in high school. A second role model is not a science writer at all. I have read Winston Churchill’s books, especially his history of the Second World War, and I find his writing both clear and elegant. A third model is physicist David Mermin. His textbook Solid State Physics (written with Neil Ashcroft) is quite well written, and you can read his essay on writing physics here. You will find learning to write scientific papers difficult if all you read are other scientific papers, because the majority are not well written. If you pattern your own writing after them you will be aiming at the wrong target. Please, learn to write well.

You can read Plain Words online (and for free) here.

This week’s blog entry seems rather long and rambling. Let me conclude with a paraphrase of Mark Twain’s famous quip about letter writing: If I had more time, I would have written a shorter blog entry.

Friday, September 6, 2013

The Art of Electronics

The Art of Electronics, by Horowitz and Hill, superimposed on Intermediate Physics for Medicine and Biology.
The Art of Electronics,
by Horowitz and Hill.
A biological physicist needs many skills, and an important one for experimentalists is electronics. In graduate school, I began my career as an experimentalist, and my PhD advisor John Wikswo required all his students to design and build at least one piece of electronics. My job was to make a timer for our microelectrode puller. I wasn’t experienced with circuit design, so at Wikswo’s suggestion I turned to The Art of Electronics, by Paul Horowitz and Winfield Hill. This wonderful book taught me almost all I know about the subject (OK, that’s not saying much). I used the first edition, but in 1989 a second edition came out. Below is the preface to the second edition.
Electronics, perhaps more than any other field of technology, has enjoyed an explosive development in the last four decades. Thus it was with some trepidation that we attempted, in 1980, to bring out a definitive volume teaching the art of the subject. By “art” we meant the kind of mastery that comes from an intimate familiarity with real circuits, actual devices, and the like, rather than the more abstract approach often favored in textbooks on electronics.
The Art of Electronics is particularly useful for understanding active circuits, such as those including transistors and operational amplifiers. I recall that in graduate school my education had a conspicuous hole: I didn’t understand transistors. The Art of Electronics helped me learn about them in an intuitive way (I still recall fondly Horowitz and Hill’s “transistor man”).

Russ Hobbie and I don’t discuss electronics explicitly in the fourth edition of Intermediate Physics for Medicine and Biology, but it is implicit in some chapters. For instance, Chapter 16 briefly discusses the thin film transistor arrays used to detect x-ray images. In Chapter 6, Figure 6.32 shows the apparatus for making voltage-clamp measurements. The “controller” in that figure is basically an op-amp, and in order to understand how it works one needs to appreciate the op-amp’s “golden rules”: 1) the output does whatever is necessary to make the voltage difference between the inputs zero, and 2) the inputs draw no current. You can do a lot with an op-amp, including simple circuits such as a voltage follower (which is needed if you want to record a voltage using a large input impedance, something that is important in bioelectric recordings), simple amplifiers, integrators, and differentiators. Horowitz and Hill describe all these circuits and more, in a way that can be understood by the beginner. For me, The Art of Electronics is to electronic circuits what Numerical Recipes is to computational methods: a well-written book that lets you learn the essence of the subject and the practical applications, without getting bogged down in all the esoteric details.
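To see how far those two rules take you, here is a sketch of my own (not from IPMB or The Art of Electronics) applying them to the standard non-inverting amplifier, in which the output feeds back to the inverting input through a resistor R2 while a resistor R1 ties that input to ground:

```latex
% Golden rule 2: the inputs draw no current, so R1 and R2 form an
% unloaded voltage divider between Vout and ground:
%   V- = Vout R1/(R1 + R2).
% Golden rule 1: feedback drives V- = V+ = Vin, and therefore
\[
V_{\mathrm{in}} = V_{\mathrm{out}}\,\frac{R_1}{R_1 + R_2}
\quad\Longrightarrow\quad
V_{\mathrm{out}} = \left(1 + \frac{R_2}{R_1}\right)V_{\mathrm{in}} .
\]
% Setting R2 = 0 gives a gain of exactly 1: the voltage follower, which
% trades voltage gain for a very large input impedance.
```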

My timer for our microelectrode puller worked, although it wasn’t pretty. As I recall, it was built using leftover parts, and looked something like a big toaster with gigantic, 1950s-style knobs. But it allowed me to pull glass microelectrodes with a reproducible resistance to use in intracellular measurements of voltage in nerve axons. My experimental work culminated in the first simultaneous measurement of the transmembrane potential and magnetic field of a nerve axon (see Barach, Roth, and Wikswo, IEEE Trans. Biomed. Eng., Volume 32, Pages 136–140, 1985; and Roth and Wikswo, Biophys. J., Volume 48, Pages 93–109, 1985). The Biophysical Journal paper is one of my favorites, and represents the high-water mark of my experimental career. However, I also like the less-cited IEEE TBME paper for two reasons: it was my very first journal article (appearing in February of 1985, whereas the Biophysical Journal paper appeared in July), and it is my only paper in which I supplied the experimental data and someone else (in this case, Prof. John Barach) performed the theoretical analysis. In time, though, it became apparent that my talents and interests were more in mathematical modeling and computer simulation. Nevertheless, I have always had enormous respect for experimental work, which in my view is more difficult than theoretical analysis. I have suffered from a case of “experimentalist envy” since those formative years in graduate school.

Rumor has it that a 3rd edition of The Art of Electronics will appear soon.

Friday, August 30, 2013

The Ascent of Sap in Trees

In Chapter 1 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I included a homework problem about moving water up trees.
Problem 34 Sap flows up a tree at a speed of about 1 mm s−1 through its vascular system (xylem), which consists of cylindrical pores of 20 μm radius. Assume the viscosity of sap is the same as the viscosity of water. What pressure difference between the bottom and top of a 100 m tall tree is needed to generate this flow? How does it compare to the hydrostatic pressure difference caused by gravity?
When you calculate the pressure needed to push water (that is, sap) up the tree through the xylem, you get (Spoiler Alert!) twenty atmospheres to overcome the viscous resistance of the pores, and ten atmospheres to overcome gravity. How does the tree generate all this pressure? That is a famous old problem known as the “ascent of sap.”
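If you want to verify those numbers, here is a quick numerical check (a sketch of mine, assuming Poiseuille flow through a single cylindrical pore, with the viscosity of water):

```python
# Pressure difference needed to drive sap up a 100 m tree, assuming
# Poiseuille flow in a cylindrical pore: dp = 8*eta*L*v/r**2, where v
# is the average flow speed.
eta = 1.0e-3    # viscosity of water (and, by assumption, sap), Pa s
L   = 100.0     # height of the tree, m
v   = 1.0e-3    # sap speed, m/s
r   = 20.0e-6   # radius of a xylem pore, m
rho = 1000.0    # density of water, kg/m^3
g   = 9.8       # gravitational acceleration, m/s^2
atm = 1.013e5   # one atmosphere, Pa

dp_viscous     = 8.0 * eta * L * v / r**2   # viscous pressure drop
dp_hydrostatic = rho * g * L                # weight of the water column

print(f"viscous:     {dp_viscous/atm:.0f} atm")      # about 20 atm
print(f"hydrostatic: {dp_hydrostatic/atm:.0f} atm")  # about 10 atm
```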

Now I admit that a 100-meter tree is, indeed, very tall, taller even than the Statue of Liberty. But it is not an unrealistic example. The majestic sequoias in California reach this height. The tallest known tree, named Hyperion, is a sequoia (coast redwood) in northern California’s Redwood National and State Parks that reaches a height of 115 m. The leaves at the top of that redwood need water to carry out photosynthesis. How do they get it?

First, let us consider some mechanisms that do not work. The tree cannot suck the water up, as if it were a gigantic drinking straw. Even if the tree could produce a true vacuum at its top, it could create a pressure difference of only one atmosphere, which corresponds to a rise of water of 10 m. Another idea is that the water rises by capillary action, like a giant wick. But the height that can be reached by climbing up a tube via surface tension is inversely proportional to the tube radius, and for xylem’s 20-micron-radius tubes water will rise only a tiny fraction of the tree’s height (in Sec. 12.2 of his book Air and Water, Mark Denny estimates that water would rise in xylem by capillary action to only a height of three-fourths of a meter; a quick check of his estimate follows below). Osmotic pressure won’t work either, for any realistic concentration gradient. So what is the answer?
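Here is that check of Denny’s estimate (again my own sketch, assuming the surface tension of water and complete wetting, that is, a contact angle of zero):

```python
# Capillary rise in a narrow tube: h = 2*gamma/(rho*g*r),
# assuming a contact angle of zero (complete wetting).
gamma = 0.073   # surface tension of water, N/m
rho   = 1000.0  # density of water, kg/m^3
g     = 9.8     # gravitational acceleration, m/s^2
r     = 20.0e-6 # radius of a xylem pore, m

h = 2.0 * gamma / (rho * g * r)
print(f"capillary rise: {h:.2f} m")  # ~0.74 m: three-fourths of a meter,
                                     # nowhere near the top of a 100 m tree
```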

There is still some controversy, but the generally accepted mechanism for the ascent of sap is called the cohesion-tension theory. In the leaves, capillary action through very tiny channels helps pull water upwards to replace that which evaporates from the leaf surface. In the larger pores of the xylem, the water is pulled by tension (negative pressure), somewhat like a steel cable pulling an elevator up its shaft. But can water support such a tension? It can, but there is a problem. If any air is present, the system will fail. Think of a piston filled half with water and half with air. If you pull on the piston, you will just expand the air as its pressure is reduced. Now consider the piston with only one-quarter air and three-quarters water; the air still expands when you pull. In fact, if there is even one bubble present in the water, pulling on the piston will cause it to expand. Only if the piston contains no air at all will the water be able to exert a tension force. In other words, water under negative pressure is susceptible to cavitation: the formation of bubbles. Fortunately, the structure of xylem is such that bubbles cannot grow indefinitely, but get trapped in one compartment.

For more details, see “The Cohesion-Tension Mechanism and the Acquisition of Water by Plant Roots,” by Ernst Steudle (Annual Review of Plant Physiology, Volume 52, Pages 847–875, 2001). Below I reproduce his summary of cohesion-tension theory. Note that 100 MPa is 1000 atm!
• Water has high cohesive forces. It can be subjected to from some ten to several hundred MPa before columns break. When subjected to tensions, water is in a metastable state, i.e. pressure in xylem vessels is much smaller than the equilibrium water vapor pressure at the given temperature.
• Walls of vessels represent the weak part of the system. They may contain air or seeds of water vapor. When a critical tension is reached in the lumen of xylem vessels, pits in vessel walls allow the passage of air through them, resulting in cavitation (embolism).
• Water in vessels of higher plants forms a continuous system from evaporating surfaces in the leaves to absorbing surfaces of the roots and into the soil (soil-plant-air-continuum; SPAC). With few exceptions, water flow within the SPAC is hydraulic in nature, and the system can be described as a network of resistors arranged in series and in parallel.
• Evaporation from leaves lowers their water potential and causes water to move from the xylem to evaporating cells across leaf tissue. This reduces the pressure in the xylem, often to values well below zero (vacuum).
• Gradients in pressure (water potential) are established along transpiring plants; this causes an inflow of water from the soil into the roots and to the transpiring surfaces in the leaves.
Here is an animation that nicely summarizes this process.

I find the idea of water being hoisted up a tree by tens of atmospheres of tension to be fascinating, if a bit disconcerting. This phenomenon offers a fine example of the important role of physics in biology.

Friday, August 23, 2013

Stealth Nanoparticles Boost Radiotherapy

I hope, dear readers, that you all have been regularly browsing through http://medicalphysicsweb.org, the website from the Institute of Physics dedicated to medical physics news. I was particularly taken by the article published there this week titled “Stealth Nanoparticles Boost Radiotherapy.” Russ Hobbie and I don’t talk about nanoparticles in the 4th edition of Intermediate Physics for Medicine and Biology, but they are a hot topic in biomedical research these days. The article by freelance journalist Cynthia Keen begins
Imagine a microscopic bomb precisely positioned inside a cancer tumour cell that explodes when ignited by a dose of precision-targeted radiotherapy. The cancerous tumour is destroyed. The healthy tissue surrounding it survives.

This scenario may become reality within a decade if research by Massachusetts scientists on using nanoparticles to deliver cancer-fighting drugs proceeds smoothly. Wilfred F Ngwa, a medical physicist in the department of radiation oncology at Brigham and Women's Hospital and Dana Farber Cancer Institute in Boston, described the latest initiative at the AAPM annual meeting, held earlier this month in Indianapolis, IN. 
We discuss radiation therapy in Chapter 16 of IPMB. The trick of radiotherapy is to selectively kill cancer cells while sparing normal tissue. The nanoparticles are designed to target tumors
by applying tumour vasculature-targeted cisplatin, Oxaliplatin or carboplatin [three widely used, platinum-based chemotherapy drugs] nanoparticles during external-beam radiotherapy, a substantial photon-induced boost to tumour endothelial cells can be achieved. This would substantially increase damage to the tumour’s blood vessels, as well as cells that cause cancer to recur, while also delivering chemotherapy with fewer toxicities.
Nanoparticles typically have sizes on the order of 10 to 100 nm, small enough to pass easily through the smallest blood vessels but too big to fit through ion channels in the cell membrane. A nanoparticle is about the size of a large biomolecule or a small virus. Nanoparticles are used in both imaging and therapy. For an overview, see the review by Shashi Murthy (International Journal of Nanomedicine, Volume 2, Pages 129–141, 2007).

The medicalphysicsweb article concludes
“The promising result of using approved platinum-based nanoparticles combined with experimental results of the past two years convince us that our new RAID [radiotherapy application with in situ dose-painting] approach to cancer provides a number of possibilities for customizing and significantly improving radiotherapy,” Ngwa said at the press conference. This research is still in its early stages, with laboratory testing of the new approach in mice ongoing. If tests continue to prove successful, and a grant or private funding is available, it will lead to clinical trials in humans. The researchers are hopeful that they will be able to continue their work without any disruption and to move their novel treatment from laboratory to clinical use. 
Another news story about this research can be found here.

Friday, August 16, 2013

We Need Theoretical Physics Approaches to Study Living Systems

An editorial titled “We Need Theoretical Physics Approaches to Study Living Systems,” which was published recently in the journal Physical Biology (Volume 10, Article number 040201), has resonated with me. Krastan Blagoev, Kamal Shukla and Herbert Levine discuss the importance of using simple physical models to understand complicated biological problems. The debate about how much detail to include in mathematical models is a constant source of tension between physicists and biologists, and even between physicists and biomedical engineers. I agree with the editorial’s authors: simple models are vitally important. Biologists (and even more so, medical doctors) put great emphasis on the complexity of their systems. But the value of a simple model is that it highlights the fundamental behavior of a system that is often not obvious from experiments. If we build realistic models including all the complexity, they will be just as difficult to understand as are the experiments themselves. Blagoev, Shukla and Levine say much the same (my italics).
In this editorial, we propose that theoretical physics can play an essential role in making sense of living matter. When faced with a highly complex system, a physicist builds simplified models. Quoting Philip W Anderson’s Nobel prize address, “the art of model-building is the exclusion of real but irrelevant parts of the problem and entails hazards for the builder and the reader. The builder may leave out something genuinely relevant and the reader, armed with too sophisticated an experimental probe, may take literally a schematized model. Very often such a simplified model throws more light on the real working of nature... ” In his formulation, the job of a theorist is to get at the crux of the system by ignoring details and yet to find a testable consequence of the resulting simple picture. This is rather different than the predilection of the applied mathematician who wants to include all the known details in the hope of a quantitative simulacrum of reality. These efforts may be practically useful, but do not usually lead to increased understanding.
In my own research, the best example of simple model building is the prediction of adjacent regions of depolarization and hyperpolarization during electrical stimulation of the heart. Nestor Sepulveda, John Wikswo, and I used the “bidomain model,” which accounts for essential properties of cardiac tissue such as the tissue anisotropy and the relative electrical conductivity of the intracellular and extracellular spaces (Biophysical Journal, Volume 55, Pages 987–999, 1989; I have discussed this study in this blog before). Yet, this model was an enormous simplification. We ignored the opening and closing of ion channels, the membrane capacitance, the curvature of the myocardial fibers, the cellular structure of the tissue, the details of the electrode-tissue interface, the three-dimensional volume of the tissue, and much more. Nevertheless, the model made a nonintuitive qualitative prediction that was subsequently confirmed by experiments. I think the reason this research has made an impact (over 200 citations to the paper so far) is that we were able to strip our model of all the unnecessary details except those key ones underlying the qualitative behavior. The gist of this idea can be found in a quote usually attributed to Einstein: Everything should be made as simple as possible, but no simpler. I must admit, sometimes it pays to be lucky when deciding which features of a model to keep and which to throw out. But it is not all luck; model building is a skill that needs to be learned.

The editorial continues (again, my italics)
A leading biologist once remarked to one of us that a calculation of in vivo cytoskeletal dynamics that did not take into account the fact that the particular cell in question had more than ten isoforms of actin could not possibly be correct. We need to counter that any calculation which takes into account all these isoforms is overwhelmingly likely to be vastly under-constrained and ultimately not useful. Adding more details can often bring us further from reality. Of course, the challenge for models is then falsification, i.e., finding robust predictions which can be directly tested experimentally.
How does one learn and practice model building? One place to start—regular readers of this blog will have already guessed my answer—is the 4th edition of Intermediate Physics for Medicine and Biology. This book, and especially the homework problems at the end of each chapter, provides plenty of examples of model building (for simple models applied to the study of the heart, see Chapter 10, Problems 37–40). I think that this aspect of the book sets it apart from many other texts, which cover the biology in more detail.

Krastan Blagoev is the director of the Physics of Living Systems program at the National Science Foundation. According to the NSF website
The program “Physics of Living Systems” (PoLS) in the Physics Division at the National Science Foundation targets theoretical and experimental research exploring the most fundamental physical processes that living systems utilize to perform their functions in dynamic and diverse environments. The focus should be on understanding basic physical principles that underlie biological function. Proposals that use physics only as a tool to study biological questions are of low priority.
Because I might someday apply for a grant from the PoLS program, let me note that Dr. Blagoev is a gentleman and a scholar, who has done much to advance the application of physics to biology. To learn more about Blagoev, see the April 2008 issue of The Biological Physicist, the newsletter for the Division of Biological Physics of the American Physical Society. Shukla is the director for the “Biomolecular Dynamics, Structure and Function” program at NSF, which I am unlikely ever to seek funding from, so I’ll just say he is probably a good guy too. Levine is the Director of the Center for Theoretical Biological Physics at Rice University.

Friday, August 9, 2013

Martha Chase (1927-2003)

Ten years ago yesterday, the American biologist Martha Chase passed away. Chase is famous for her participation in a fundamental genetics experiment. In collaboration with Alfred Hershey, she performed this experiment in 1952 at Cold Spring Harbor Laboratory (see last week's blog entry).  Their results supported the hypothesis that DNA is the biological molecule that carries genetic information. They showed that the DNA, not the protein, of the bacteriophage T2 (a virus that infects bacteria) entered E. coli upon infection.

The Eighth Day of Creation: The Makers of the Revolution in Biology, by Horace Freeland Judson, superimposed on Intermediate Physics for Medicine and Biology.
The Eighth Day of Creation:
The Makers of the Revolution in Biology,
by Horace Freeland Judson.
To describe this experiment, I quote from Horace Freeland Judson’s wonderful book The Eighth Day of Creation: The Makers of the Revolution in Biology.
Hershey and Chase decided to see if they could strip off the empty phage ghosts from the bacteria and find out what they were and where their contents had gone. DNA contains no sulphur; phage protein has no phosphorus. Accordingly, they began by growing phage in a bacterial culture with a radioactive isotope as the only phosphorus in the soup [P32], which was taken up in all the phosphate groups as the DNA of the phage progeny was assembled, or, in the parallel experiment, by growing phage whose coat protein was labelled with hot sulphur [S35]. They used the phage to infect fresh bacteria in broths that were not radioactive, and a few minutes after infection tried to separate the bacteria from the emptied phage coats. “We tried various grinding arrangements, with results that weren’t very encouraging,” Hershey wrote later. Then they made a technological breakthrough, in the best Delbruck fashion of homely improvisation. “When Margaret McDonald loaned us her blender the experiment promptly succeeded.”
This ordinary kitchen blender provided just the right shear forces to strip the empty bacteriophage coats off the bacteria. When tested, those bacteria infected by phages containing radioactive phosphorus were themselves radioactive, but those infected by phages containing radioactive sulphur were not. Thus, the DNA and not the protein is the genetic material responsible for infection. This was truly an elegant experiment. The key was the use of radioactive tracers. Russ Hobbie and I discuss nuclear physics and nuclear medicine in Chapter 17 of the 4th edition of Intermediate Physics for Medicine and Biology. We focus on medical applications of radioactive isotopes, but we should remember that these tracers also have played a crucial role in experiments in basic biology.

Hershey and Chase’s experiment, often called the Waring Blender experiment, is a classic studied in introductory biology classes. It was the high point of Chase’s career. She obtained her bachelor’s degree from the College of Wooster and was then hired by Hershey to work in his Cold Spring Harbor laboratory. She stayed at Cold Spring Harbor only three years, but in that time she and Hershey performed their famous experiment. In 1964 she obtained her PhD from the University of Southern California. Unfortunately, things did not go so well for Chase after that. Writer Milly Dawson tells the story.
In the late 1950s in California, she had met and married a fellow scientist, Richard Epstein, but they soon divorced… Chase suffered several other personal setbacks, including a job loss, in the late 1960s, a period that saw the end of her scientific career. Later, she experienced decades of dementia, with long-term but no short-term memory. [Waclaw] Szybalski [a colleague at Cold Spring Harbor Laboratory in the 1950s] remembered his friend as “a remarkable but tragic person.”
A good description of the Hershey-Chase experiment can be found here. You can learn more about the life of Martha Chase in obituaries here and here. Szybalski’s reminiscences are recorded in a Cold Spring Harbor oral history available here. Dawson’s tribute can be found here. And most importantly, the 1952 Hershey-Chase paper can be found here.

Friday, August 2, 2013

Cold Spring Harbor Laboratory

A photograph of me standing next to the entrance of Cold Spring Harbor Laboratory.
Me standing next to the entrance of
Cold Spring Harbor Laboratory.
Last week my wife, my mother-in-law, and I made a brief trip to Long Island, New York, where we made a quick stop at the Cold Spring Harbor Laboratory. What a lovely setting for a research center. We drove around the grounds, looking at the various labs. It sits right on a bay off the Long Island Sound, and looks more like a resort than a scientific laboratory. James Watson, of DNA fame, was the long-time director of Cold Spring Harbor Lab.

In the last few years, the lab has begun a thrust into “Quantitative Biology.” This area of research has much overlap with the 4th edition of Intermediate Physics for Medicine and Biology. I view this development as evidence that science is going in “our direction,” toward a larger role for physics and math in medicine and biology. The Cold Spring Harbor website describes the new Simons Center for Quantitative Biology.
Cold Spring Harbor Laboratory (CSHL) has recently opened the Simons Center for Quantitative Biology (SCQB). The areas of expertise in the SCQB include applied mathematics, computer science, theoretical physics, and engineering. Members of the SCQB will interact closely with other CSHL researchers and will apply their approaches to research areas including genomic analysis, population genetics, neurobiology, evolutionary biology, and signal and image processing.
We passed by CSHL during a trip that included stops at Sagamore Hill National Historic Site in Oyster Bay (President Theodore Roosevelt’s home), Planting Fields Arboretum, and the Montauk Point Lighthouse.

Friday, July 26, 2013

Bioelectromagnetism: Principles and Applications of Bioelectric and Biomagnetic Fields

Bioelectromagnetism: Principles and Applications of Bioelectric and Biomagnetic Fields, by Jaakko Malmivuo and Robert Plonsey, superimposed on Intermediate Physics for Medicine and Biology.
Bioelectromagnetism,
by Malmivuo and Plonsey.
A good textbook about bioelectricity and biomagnetism is Bioelectromagnetism: Principles and Applications of Bioelectric and Biomagnetic Fields by Jaakko Malmivuo and Robert Plonsey (Oxford University Press, 1995). One of the best features of the book is that it is available for free online at www.bem.fi/book/index.htm. The book covers many of the topics Russ Hobbie and I discuss in Chapters 6–9 of the 4th edition of Intermediate Physics for Medicine and Biology: the cable equation, the Hodgkin and Huxley model, patch-clamp recordings, the electrocardiogram, biomagnetism, the bidomain model, and magnetic stimulation. The book’s introduction outlines its eight parts:
Part I discusses the anatomical and physiological basis of bioelectromagnetism. From the anatomical perspective, for example, Part I considers bioelectric phenomena first on a cellular level (i.e., involving nerve and muscle cells) and then on an organ level (involving the nervous system (brain) and the heart).

Part II introduces the concepts of the volume source and volume conductor and the concept of modeling. It also introduces the concept of impressed current source and discusses general theoretical concepts of source-field models and the bidomain volume conductor. These discussions consider only electric concepts.

Part III explores theoretical methods and thus anatomical features are excluded from discussion. For practical (and historical) reasons, this discussion is first presented from an electric perspective in Chapter 11. Chapter 12 then relates most of these theoretical methods to magnetism and especially considers the difference between concepts in electricity and magnetism.

The rest of the book (i.e., Parts IV–IX) explores clinical applications. For this reason, bioelectromagnetism is first classified on an anatomical basis into bioelectric and bio(electro)magnetic constituents to point out the parallelism between them. Part IV describes electric and magnetic measurements of bioelectric sources of the nervous system, and Part V those of the heart.

In Part VI, Chapters 21 and 22 discuss electric and magnetic stimulation of neural and Part VII, Chapters 23 and 24, that of cardiac tissue. These subfields are also referred to as electrobiology and magnetobiology. Part VIII focuses on Subdivision III of bioelectromagnetism—that is, the measurement of the intrinsic electric properties of biological tissue. Chapters 25 and 26 examine the measurement and imaging of tissue impedance, and Chapter 27 the measurement of the electrodermal response.

In Part IX, Chapter 28 introduces the reader to a bioelectric signal that is not generated by excitable tissue: the electro-oculogram (EOG). The electroretinogram (ERG) also is discussed in this connection for anatomical reasons, although the signal is due to an excitable tissue, namely the retina.
Jaakko Malmivuo is a Professor in the School of Electrical Engineering at Aalto University in Helsinki, Finland. He is also the director of the Ragnar Granit Institute.

Robert Plonsey is the Pfizer-Pratt University Professor Emeritus of Biomedical Engineering at Duke University. This year, he received the IEEE Biomedical Engineering Award “for developing quantitative methods to characterize the electromagnetic fields in excitable tissue, leading to a better understanding of the electrophysiology of nerve, muscle, and brain.” Plonsey is cited on 16 pages of Intermediate Physics for Medicine and Biology, the most of any scientist or author.

Friday, July 19, 2013

Reinventing Physics For Life-Science Majors

The July issue of Physics Today contained an article by Dawn Meredith and Joe Redish titled “Reinventing Physics for Life-Science Majors.” Much in the article is relevant to the 4th edition of Intermediate Physics for Medicine and Biology. The main difference between the goals of their article and IPMB is that they discuss the introductory physics course, whereas Russ Hobbie and I wrote an intermediate-level text. Nevertheless, many of the aims remain the same. Meredith and Redish begin
Physics departments have long been providing service courses for premedical students and biology majors. But in the past few decades, the life sciences have grown explosively as new techniques, new instruments, and a growing understanding of biological mechanisms have enabled biologists to better understand the physiochemical processes of life at all scales, from the molecular to the ecological. Quantitative measurements and modeling are emerging as key biological tools. As a result, biologists are demanding more effective and relevant undergraduate service classes in math, chemistry, and physics to help prepare students for the new, more quantitative life sciences.
Their section on what skills students should learn reads like a list of goals for IPMB:
• Drawing inferences from equations….
• Building simple quantitative models….
• Connecting equations to physical meaning….
• Integrating multiple representations….
• Understanding the implications of scaling and functional dependence….
• Estimating….
Meredith and Redish realize the importance of developing appropriate homework problems for life-science students, which is something Russ and I have spent an enormous amount of time on when revising IPMB. “We have spent a good deal of time in conversation with our biology colleagues and have created problems of relevance to them that are also doable by students in an introductory biology course.” They then offer a delightful problem about calculating how big a worm can grow (see their Box 4). They also include a photo of a “spherical cow”; you need to see it to understand. And they propose the Gauss gun (see a video here) as a model for exothermic reactions. They conclude
Teaching physics to biology students requires far more than watering down a course for engineers and adding in a few superficial biological applications. What is needed is for physicists to work closely with biologists to learn not only what physics topics and habits of mind are useful to biologists but also how the biologist’s work is fundamentally different from ours and how to bridge that gap. The problem is one of pedagogy, not just biology or physics, and solving it is essential to designing an IPLS [Introductory Physics for the Life Sciences] course that satisfies instructors and students in both disciplines.

Friday, July 12, 2013

The Bohr Model

One hundred years ago this month, Niels Bohr published his model of the atom (“On the Constitution of Atoms and Molecules,” Philosophical Magazine, Volume 26, Pages 1–25, 1913). In the May 2013 issue of Physics Today, Helge Kragh writes
Published in a series of three papers in the summer and fall of 1913, Niels Bohr’s seminal atomic theory revolutionized physicists’ conception of matter; to this day it is presented in high school and undergraduate-level textbooks.
The Making of the Atomic Bomb, by Richard Rhodes, superimposed on Intermediate Physics for Medicine and Biology.
The Making of the Atomic Bomb,
by Richard Rhodes.
I find Bohr’s model fascinating for several reasons: 1) it was the first application of quantum ideas to atomic structure, 2) it predicts the size of the atom, 3) it implies discrete atomic energy levels, 4) it explains the hydrogen spectrum in terms of transitions between energy levels, and 5) it provides an expression for the Rydberg constant in terms of fundamental parameters. In his book The Making of the Atomic Bomb, Richard Rhodes discusses the background leading to Bohr’s discovery.
Johann Balmer, a nineteenth-century Swiss mathematical physicist, identified in 1885 … a formula for calculating the wavelengths of the spectral lines of hydrogen… A Swedish spectroscopist, Johannes Rydberg, went Balmer one better and published in 1890 a general formula valid for a great many different line spectra. The Balmer formula then became a special case of the more general Rydberg equation, which was built around a number called the Rydberg constant [R]. That number, subsequently derived by experiment and one of the most accurately known of all universal constants, takes the precise modern value of 109,677 cm−1.

Bohr would have known these formulae and numbers from undergraduate physics, especially since Christensen [Bohr’s doctorate advisor] was an admirer of Rydberg and had thoroughly studied his work. But spectroscopy was far from Bohr’s field and he presumably had forgotten them. He sought out his old friend and classmate, Hans Hansen, a physicist and student of spectroscopy just returned from Gottingen. Hansen reviewed the regularity of the line spectra with him. Bohr looked up the numbers. “As soon as I saw Balmer’s formula,” he said afterward, “the whole thing was immediately clear to me.”

What was immediately clear was the relationship between his orbiting electrons and the lines of spectral light… The lines of the Balmer series turn out to be exactly the energies of the photons that the hydrogen electron emits when it jumps down from orbit to orbit to its ground state. Then, sensationally, with the simple formula R = 2π²me⁴/h³ (where m is the mass of the electron, e the electron charge and h Planck’s constant—all fundamental numbers, not arbitrary numbers Bohr made up) Bohr produced Rydberg’s constant, calculating it within 7 percent of its experimentally measured value!...
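For readers who want the algebra behind that passage, here is the standard textbook chain of reasoning (my summary, in SI units rather than the Gaussian units Rhodes uses):

```latex
% Bohr energy levels of hydrogen (SI units):
\[
E_n = -\frac{m e^4}{8 \epsilon_0^2 h^2} \, \frac{1}{n^2},
\qquad n = 1, 2, 3, \ldots
\]
% A photon emitted in a jump from level n_2 down to n_1 carries
% energy hc/lambda, so
\[
\frac{1}{\lambda} = \frac{m e^4}{8 \epsilon_0^2 h^3 c}
\left( \frac{1}{n_1^2} - \frac{1}{n_2^2} \right)
= R \left( \frac{1}{n_1^2} - \frac{1}{n_2^2} \right),
\]
% where n_1 = 2 gives the Balmer series and R is the Rydberg constant.
```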
In Chapter 14 of the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the Bohr model, but interestingly we do not attribute the model to Bohr. However, at other locations in the book, we casually refer to Bohr’s model by name: see Problem 33 of Chapter 15 where we mention “Bohr orbits,” and Sections 15.9 and 16.1.1 where we refer to the “Bohr formula.” I guess we assumed that everyone knows what the Bohr model is (a pretty safe assumption for readers of IPMB). In Problem 4 of Chapter 14 (one of the new homework problems in the 4th edition), the reader is asked to derive the expression for the Rydberg constant in terms of fundamental parameters (you don’t get exactly the same answer as in the quote above; presumably Rhodes didn’t use SI units).
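Working that problem numerically (my sketch, with rounded CODATA constants) shows just how well the Bohr model does:

```python
# Rydberg constant from fundamental constants (SI units):
# R = m e^4 / (8 eps0^2 h^3 c), the result of Problem 4 in Chapter 14.
m    = 9.1093837e-31   # electron mass, kg
e    = 1.6021766e-19   # elementary charge, C
eps0 = 8.8541878e-12   # permittivity of free space, F/m
h    = 6.6260702e-34   # Planck's constant, J s
c    = 2.9979246e8     # speed of light, m/s

R = m * e**4 / (8.0 * eps0**2 * h**3 * c)
print(f"R = {R:.4e} 1/m = {R/100:,.0f} 1/cm")  # about 109,737 cm^-1
# This is the infinite-nuclear-mass value. Rhodes's 109,677 cm^-1 is the
# value for hydrogen, slightly smaller because the electron mass should
# be replaced by the reduced mass of the electron-proton pair.
```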

Bohr would become one of the principal figures in the development of modern quantum mechanics. He also made fundamental contributions to nuclear physics, and contributed to the Manhattan Project. He was awarded the Nobel Prize in Physics in 1922 “for his services in the investigation of the structure of atoms and of the radiation emanating from them.” He is Denmark’s most famous scientist, and for years he led the Institute of Theoretical Physics at the University of Copenhagen. A famous play, titled Copenhagen, is about his meeting with former collaborator Werner Heisenberg in then-Nazi-controlled Denmark in 1941. Here is a clip.

Bohr and Heisenberg discussing the uncertainty principle, in Copenhagen.

Physicists around the world are celebrating this 100-year anniversary; for instance here, here, here and here.

I end with Bohr’s own words: an excerpt from the introduction of his first 1913 paper (references removed).
In order to explain the results of experiments on scattering of α rays by matter Prof. Rutherford has given a theory of the structure of atoms. According to this theory, the atoms consist of a positively charged nucleus surrounded by a system of electrons kept together by attractive forces from the nucleus; the total negative charge of the electrons is equal to the positive charge of the nucleus. Further, the nucleus is assumed to be the seat of the essential part of the mass of the atom, and to have linear dimensions exceedingly small compared with the linear dimensions of the whole atom. The number of electrons in an atom is deduced to be approximately equal to half the atomic weight. Great interest is to be attributed to this atom-model; for, as Rutherford has shown, the assumption of the existence of nuclei, as those in question, seems to be necessary in order to account for the results of the experiments on large angle scattering of the α rays.

In an attempt to explain some of the properties of matter on the basis of this atom-model we meet however, with difficulties of a serious nature arising from the apparent instability of the system of electrons: difficulties purposely avoided in atom-models previously considered, for instance, in the one proposed by Sir J. J. Thomson. According to the theory of the latter the atom consists of a sphere of uniform positive electrification, inside which the electrons move in circular orbits. The principal difference between the atom-models proposed by Thomson and Rutherford consists in the circumstance that the forces acting on the electrons in the atom-model of Thomson allow of certain configurations and motions of the electrons for which the system is in a stable equilibrium; such configurations, however, apparently do not exist for the second atom-model. The nature of the difference in question will perhaps be most clearly seen by noticing that among the quantities characterizing the first atom a quantity appears—the radius of the positive sphere—of dimensions of a length and of the same order of magnitude as the linear extension of the atom, while such a length does not appear among the quantities characterizing the second atom, viz. the charges and masses of the electrons and the positive nucleus; nor can it be determined solely by help of the latter quantities.

The way of considering a problem of this kind has, however, undergone essential alterations in recent years owing to the development of the theory of the energy radiation, and the direct affirmation of the new assumptions introduced in this theory, found by experiments on very different phenomena such as specific heats, photoelectric effect, Röntgen [etc]. The result of the discussion of these questions seems to be a general acknowledgment of the inadequacy of the classical electrodynamics in describing the behaviour of systems of atomic size. Whatever the alteration in the laws of motion of the electrons may be, it seems necessary to introduce in the laws in question a quantity foreign to the classical electrodynamics, i. e. Planck’s constant, or as it often is called the elementary quantum of action. By the introduction of this quantity the question of the stable configuration of the electrons in the atoms is essentially changed as this constant is of such dimensions and magnitude that it, together with the mass and charge of the particles, can determine a length of the order of magnitude required. This paper is an attempt to show that the application of the above ideas to Rutherford’s atom-model affords a basis for a theory of the constitution of atoms. It will further be shown that from this theory we are led to a theory of the constitution of molecules.

In the present first part of the paper the mechanism of the binding of electrons by a positive nucleus is discussed in relation to Planck’s theory. It will be shown that it is possible from the point of view taken to account in a simple way for the law of the line spectrum of hydrogen. Further, reasons are given for a principal hypothesis on which the considerations contained in the following parts are based.

I wish here to express my thanks to Prof. Rutherford for his kind and encouraging interest in this work.