
Friday, April 28, 2023

Biomagnetism: The First Sixty Years

Roth, B. J., 2023, Biomagnetism: The first sixty years. Sensors, 23:4218.
The last two blog posts have dealt with biomagnetism: the magnetic fields produced by our bodies. Some of you might have noticed hints about how these posts originated in “another publication.” That other publication is now published! This week, my review article “Biomagnetism: The First Sixty Years” appeared in the journal Sensors. The abstract is given below.
Biomagnetism is the measurement of the weak magnetic fields produced by nerves and muscle. The magnetic field of the heart—the magnetocardiogram (MCG)—is the largest biomagnetic signal generated by the body and was the first measured. Magnetic fields have been detected from isolated tissue, such as a peripheral nerve or cardiac muscle, and these studies have provided insights into the fundamental properties of biomagnetism. The magnetic field of the brain—the magnetoencephalogram (MEG)—has generated much interest and has potential clinical applications to epilepsy, migraine, and psychiatric disorders. The biomagnetic inverse problem, calculating the electrical sources inside the brain from magnetic field recordings made outside the head, is difficult, but several techniques have been introduced to solve it. Traditionally biomagnetic fields are recorded using superconducting quantum interference device (SQUID) magnetometers, but recently new sensors have been developed that allow magnetic measurements without the cryogenic technology required for SQUIDs.
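To appreciate the scale of these signals, here is a back-of-the-envelope estimate in Python (my own illustrative numbers, not values from the review): treating the heart as a single current dipole and using the free-space dipole field gives a field of a few hundred picotesla, the magnetocardiogram scale.

import math

# A rough sketch, assuming a cardiac current dipole moment Q and a
# heart-to-sensor distance r (both guesses, not from the review).
# The field of a current dipole scales as B ~ mu0*Q/(4*pi*r**2),
# ignoring the conducting torso.
mu0 = 4 * math.pi * 1e-7            # permeability of free space, T m/A
Q = 2e-5                            # assumed dipole moment, A m
r = 0.10                            # assumed distance, m
B = mu0 * Q / (4 * math.pi * r**2)
print(f"B ~ {B:.1e} T")             # ~2e-10 T, i.e., hundreds of picotesla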

The “First Sixty Years” refers to 2023 marking six decades since the original biomagnetism publication: Baule and McFee’s first measurement of the magnetocardiogram in 1963. 

My article completes a series of six reviews I’ve published in the last few years. 

Get the whole set! All are open access except the first. If you need a copy of that one, just email me at roth@oakland.edu and I’ll send you a pdf.

I’m not preparing any other reviews, so this will probably be the last one. But, you never know. 

You can learn more about biomagnetism in Chapter 8 of Intermediate Physics for Medicine and Biology.

Enjoy! 

A word cloud derived from "Biomagnetism: The First Sixty Years."


 

Friday, April 21, 2023

The Magnetic Field Associated with a Plane Wave Front Propagating Through Cardiac Tissue

When I was on the faculty at Vanderbilt University, my student Marcella Woods and I examined the magnetic field produced by electrical activity in a sheet of cardiac muscle. I really like this analysis, because it provides a different view of the mechanism producing the magnetic field compared to that used by other researchers studying the magnetocardiogram. Here is how I describe our research in another publication. I hope you find it useful.
Roth and Marcella Woods examined an action potential propagating through a two-dimensional sheet of cardiac muscle [58]. In Fig. 6, a wave front is propagating to the right, so the myocardium on the left is fully depolarized and on the right is at rest. Cardiac muscle is anisotropic, meaning it has a different electrical conductivity parallel to the myocardial fibers than perpendicular to them. In Fig. 6, the fibers are oriented at an angle to the direction of propagation. The intracellular voltage gradient is in the propagation direction (horizontal in Fig. 6), but the anisotropy rotates the intracellular current toward the fiber axis. The same thing happens to the extracellular current, except that in cardiac muscle the intracellular conductivity is more anisotropic than the extracellular conductivity, so the extracellular current is not rotated as far. Continuity requires that the components of the intra- and extracellular current densities in the propagation direction are equal and opposite. Their sum therefore points perpendicular to the direction of propagation, creating a magnetic field that comes out of the plane of the tissue on the left and into the plane on the right (Fig. 6) [58–60].
Figure 6. The current and magnetic field produced by a planar wave front propagating in a two-dimensional sheet of cardiac muscle. The muscle is anisotropic with a higher conductivity along the myocardial fibers.
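The geometry in Fig. 6 is easy to check numerically. Here is a short Python sketch (my own illustration; the conductivities and the 30-degree fiber angle are made-up values, not the ones in [58]). It builds the two anisotropic conductivity tensors, points both potential gradients along the propagation direction, scales the extracellular gradient so the current components along propagation are equal and opposite, and confirms that the net current points perpendicular to propagation.

import numpy as np

def conductivity(sigma_L, sigma_T, theta):
    """2x2 conductivity tensor in lab coordinates, fibers at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return R @ np.diag([sigma_L, sigma_T]) @ R.T

theta = np.radians(30)                     # fiber angle to propagation (x)
sig_i = conductivity(0.2, 0.02, theta)     # intracellular: strongly anisotropic
sig_e = conductivity(0.2, 0.08, theta)     # extracellular: less anisotropic

J_i = -sig_i @ np.array([-1.0, 0.0])       # intracellular current, unit gradient
g = J_i[0] / sig_e[0, 0]                   # scale the extracellular gradient so
J_e = -sig_e @ np.array([g, 0.0])          # the x components cancel (continuity)

def angle(J):
    return np.degrees(np.arctan2(J[1], J[0]))

print(angle(J_i))                          # ~27 deg: rotated toward the fibers
print(angle(J_e))                          # ~-163 deg: only ~17 deg off the -x axis
print(J_i + J_e)                           # x component zero; net current along y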

This perspective of the current and magnetic field in cardiac muscle is unlike that ordinarily adopted when analyzing the magnetocardiogram, where the impressed current is typically taken as in the same direction as propagation. Nonetheless, experiments by Jenny Holzer in Wikswo’s lab confirmed the behavior shown in Fig. 6 [61].

The main references are:

58. Roth, B.J.; Woods, M.C. The magnetic field associated with a plane wave front propagating through cardiac tissue. IEEE Trans. Biomed. Eng. 1999, 46, 1288–1292.

61. Holzer, J.R.; Fong, L.E.; Sidorov, V.Y.; Wikswo, J.P.; Baudenbacher, F. High resolution magnetic images of planar wave fronts reveal bidomain properties of cardiac tissue. Biophys. J. 2004, 87, 4326–4332. 

You can learn more about how magnetic fields are generated by cardiac muscle by reading about what happens at the apex of the heart. Or, solve homework problem 19 in Chapter 8 of Intermediate Physics for Medicine and Biology.

Friday, March 24, 2023

Three New Reviews

Over the last couple of years, I’ve been writing lots of review articles. In the last few weeks three have been published. All of them are open access, so you can read them without a subscription.

Can MRI be Used as a Sensor to Record Neural Activity?

This review asks the question “Can MRI be Used as a Sensor to Record Neural Activity?” The article is published in the journal Sensors (Volume 23, Article Number 1337). The abstract is reproduced below.
Magnetic resonance provides exquisite anatomical images and functional MRI monitors physiological activity by recording blood oxygenation. This review attempts to answer the following question: Can MRI be used as a sensor to directly record neural behavior? It considers MRI sensing of electrical activity in the heart and in peripheral nerves before turning to the central topic: recording of brain activity. The primary hypothesis is that bioelectric current produced by a nerve or muscle creates a magnetic field that influences the magnetic resonance signal, although other mechanisms for detection are also considered. Recent studies have provided evidence that using MRI to sense neural activity is possible under ideal conditions. Whether it can be used routinely to provide functional information about brain processes in people remains an open question. The review concludes with a survey of artificial intelligence techniques that have been applied to functional MRI and may be appropriate for MRI sensing of neural activity.

Parts of the review may be familiar to readers of this blog. For instance, in June of 2016 I wrote about Yoshio Okada’s experiment to measure neural activation in the cerebellum of a turtle, in August 2019 I described Allen Song’s use of spin-lock methods to record brain activity, and in April 2020 I discussed J. H. Nagel’s 1984 abstract that may have been the first to report using MRI to image action currents. All these topics are featured in my review article. In addition, I described my calculation, performed with graduate student Dan Xu, of the magnetic field produced inside the heart, and I reviewed my work with friend and colleague Ranjith Wijesinghe, from Ball State University, on MRI detection of bioelectrical activity in the brain and peripheral nerves. At the end of the review, I examined the use of artificial intelligence to interpret this type of MRI data. I don’t really know much about artificial intelligence, but the journal wanted me to address this topic so I did. With AI making so much news these days (ChatGPT was recently on the cover of TIME magazine!), I’m glad I included it.

Readers of Intermediate Physics for Medicine and Biology will find this review to be a useful extension of Section 18.12 (“Functional MRI”), especially the last paragraph of that section beginning with “Much recent research has focused on using MRI to image neural activity directly, rather than through changes in blood flow...”

Magneto-Acoustic Imaging in Biology

Next is “Magneto-Acoustic Imaging in Biology,” published in the journal Applied Sciences (Volume 13, Article Number 3877). The abstract states

This review examines the use of magneto-acoustic methods to measure electrical conductivity. It focuses on two techniques developed in the last two decades: Magneto-Acoustic Tomography with Magnetic Induction (MAT-MI) and Magneto-Acousto-Electrical Tomography (MAET). These developments have the potential to change the way medical doctors image biological tissue.
The only place in IPMB where Russ Hobbie and I talked about these topics is in Homework Problem 31 in Chapter 8, which analyzes a simple example of MAT-MI.

A Mathematical Model of Mechanotransduction

Finally comes “A Mathematical Model of Mechanotransduction” in the new journal Academia Biology (Volume 1; I can’t figure out what the article number is?!).

This article reviews the mechanical bidomain model, a mathematical description of how the extracellular matrix and intracellular cytoskeleton of cardiac tissue are coupled by integrin membrane proteins. The fundamental hypothesis is that the difference between the intracellular and extracellular displacements drives mechanotransduction. A one-dimensional example illustrates the model, which is then extended to two or three dimensions. In a few cases, the bidomain equations can be solved analytically, demonstrating how tissue motion can be divided into two parts: monodomain displacements that are the same in both spaces and therefore do not contribute to mechanotransduction, and bidomain displacements that cause mechanotransduction. The model contains a length constant that depends on the intracellular and extracellular shear moduli and the integrin spring constant. Bidomain effects often occur within a few length constants of the tissue edge. Unequal anisotropy ratios in the intra- and extracellular spaces can modulate mechanotransduction. Insight into model predictions is supplied by simple analytical examples, such as the shearing of a slab of cardiac tissue or the contraction of a tissue sheet. Computational methods for solving the model equations are described, and precursors to the model are reviewed. Potential applications are discussed, such as predicting growth and remodeling in the diseased heart, analyzing stretch-induced arrhythmias, modeling shear forces in a vessel caused by blood flow, examining the role of mechanical forces in engineered sheets of tissue, studying differentiation in colonies of stem cells, and characterizing the response to localized forces applied to nanoparticles.
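The one-dimensional behavior summarized in the abstract can be sketched in a few lines of Python. The symbols and the exact form of the length constant below are my reconstruction from the abstract, not equations quoted from the paper: with intracellular and extracellular shear moduli nu and mu, and integrin spring constant K, the displacement difference delta = u - w obeys delta'' = delta/sigma^2, so it decays exponentially within a few length constants of a tissue edge.

import numpy as np

# A minimal sketch, assuming delta'' = delta / sigma**2 with
# sigma**2 = nu*mu / (K*(nu + mu)); symbols and values are illustrative.
nu, mu, K = 10.0, 10.0, 1.0                  # shear moduli and spring constant
sigma = np.sqrt(nu * mu / (K * (nu + mu)))   # the bidomain length constant

for x in np.linspace(0.0, 10.0 * sigma, 6):  # distance from the tissue edge
    print(f"x = {x:5.2f}   delta/delta(0) = {np.exp(-x / sigma):.4f}")
# mechanotransduction, driven by delta, is confined near the edge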

This review is similar to my article that I discussed in a blog post about a year ago, but better. I originally published it as a manuscript on the bioRxiv, the preprint server for biology, but it received little attention. I hope this version does better. If you want to read this article, download the pdf instead of reading it online. The equations are all messed up on the journal website, but they look fine in the file.

If you put these three reviews together with my previous ones about magnetic stimulation and the bidomain model of cardiac electrophysiology, you have a pretty good summary of the topics I’ve worked on throughout my career. Are there more reviews coming? I’m working feverishly to finish one more. For now, I’ll let you guess the topic. I hope it’ll come out later this year.

Friday, December 30, 2022

The Development of Transcranial Magnetic Stimulation

When I worked at the National Institutes of Health, I studied transcranial magnetic stimulation. In Chapter 8 of Intermediate Physics for Medicine and Biology, Russ Hobbie and I describe this technique to activate neurons in the brain.
Since a changing magnetic field generates an induced electric field, it is possible to stimulate nerve or muscle cells without using electrodes. The advantage is that for a given induced current deep within the brain, the currents in the scalp that are induced by the magnetic field are far less than the currents that would be required for electrical stimulation. Therefore transcranial magnetic stimulation (TMS) is relatively painless.

The method was invented in 1985, and when I arrived at NIH in 1988 the field was new and ripe for analysis. I spent the next seven years calculating electric fields in the brain and determining how the electric field couples to a nerve.

Roth, B. J. (2022) The Development of Transcranial Magnetic Stimulation, BOHR International Journal of Neurology and Neuroscience, Volume 1, Pages 8–20.
Recently, I wrote a review article telling the story of how transcranial magnetic stimulation began. You can get a copy at https://www.bohrpub.com/journals/BIJNN/BIJNN_20231102; it is an open access article so everyone is free to download it. The abstract states
This review describes the development of transcranial magnetic stimulation in 1985 and the research related to this technique over the following 10 years. It not only focuses on work done at the National Institutes of Health but provides a survey of other related research as well. Key topics are the calculation of the electric field produced during magnetic stimulation, the interaction of this electric field with a long nerve axon, coil design, the time course of the magnetic stimulation pulse, and the safety of magnetic stimulation.
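The first of those key topics, the induced electric field, can be estimated from Faraday’s law in one line. The numbers below are illustrative guesses, not values from the review: inside a region of spatially uniform dB/dt, the induced field grows linearly with distance r from the symmetry axis.

# A rough sketch with assumed numbers: E = (r/2)|dB/dt| on a circle of
# radius r centered on the axis of a spatially uniform dB/dt region.
dBdt = 2.0 / 100e-6          # assumed 2 T pulse rising in 100 us, in T/s
r = 0.02                     # assumed 2 cm from the coil axis, in m
E = (r / 2) * dBdt
print(f"E ~ {E:.0f} V/m")    # ~200 V/m, the right scale for stimulating cortex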

Readers of this blog will recognize some of the topics from earlier posts, such as the calculation of the induced electric field, determining the site of stimulation along a peripheral nerve, Paul Maccabee’s wonderful article, the four-leaf coil, the heating of metal electrodes, implantable microcoils, and Tony Barker's online interview. You could almost say I pre-wrote much of the review using this blog as my test bed. 

I like magnetic stimulation because it's a classic example of how a fundamental concept from physics can have a major impact in biology and medicine. If you combine this review of transcranial magnetic stimulation together with my earlier review of the bidomain model of cardiac tissue, you get a pretty good summary of my most important research.

Enjoy!

Friday, May 20, 2022

Using the Mechanical Bidomain Model to Analyze the Biomechanical Behavior of Cardiomyocytes

During the decade of 2010–2020, my research shifted from bioelectricity and biomagnetism to biomechanics and mechanotransduction. I took the bidomain model of cardiac electrophysiology—described in Chapter 7 of Intermediate Physics for Medicine and Biology—and adapted it to describe growth and remodeling in response to mechanical forces. In other words, I traded resistors for springs. This effort was not entirely successful, but I think it provided some useful insights.

In 2015 I described the mechanical bidomain model in a chapter of Cardiomyocytes: Methods and Protocols. This book was part of the series Methods in Molecular Biology, and each chapter had an unusual format. The research was outlined, with the details relegated to an extensive collection of endnotes. A second edition of the book was proposed, and I dutifully submitted an updated chapter. However, the new edition never came to pass. Rather than see my chapter go to waste, I offer it to you, dear reader. You can download a draft of my chapter for the second edition here. For those of you who have time only for a summary, below is the abstract.

The mechanical bidomain model provides a macroscopic description of cardiac tissue biomechanics, and also predicts the microscopic coupling between the extracellular matrix and the intracellular cytoskeleton of cardiomyocytes. The goal of this chapter is to introduce the mechanical bidomain model, to describe the mathematical methods required for solving the model equations, to predict where the membrane forces acting on integrin proteins coupling the intracellular and extracellular spaces are large, and to suggest experiments to test the model predictions.

The main difference between the chapter in the first edition and the one submitted for the second was a new section called “Experiments to Test the Mechanical Bidomain Model.” There I describe how the model can reproduce data obtained when studying colonies of embryonic stem cells, sheets of engineered heart tissue, and border zones between normal and ischemic regions in the heart. The chapter ends with this observation:

The most important contribution of mathematical modeling in biology is to make predictions that can be tested experimentally. The mechanical bidomain model makes many predictions, in diverse areas such as development, tissue engineering, and hypertrophy.
I particularly like a new figure in the second edition. It’s a revision of a figure created by Xavier Trepat and Jeffrey Fredberg that compares mechanobiology to a game of tug-of-war. I added the elastic properties of the extracellular space (the green arrows), saying “It is as if the game of tug-of-war is played on a flexible surface, such as a flat elastic sheet.” In other words, tug-of-war on a trampoline.

Enjoy!

The “tug-of-war” model of tissue biomechanics, adapted from an illustration by Trepat and Fredberg. Top: the intracellular (yellow), extracellular (green), and integrin (blue) forces acting on a monolayer of cells. Middle: the analogous forces among the players of a game of tug-of-war. This figure is extended beyond that of Trepat and Fredberg by allowing both the intracellular and extracellular spaces to move. Bottom: representation of the mechanical bidomain model by a ladder of springs.

Friday, November 12, 2021

Bidomain Modeling of Electrical and Mechanical Properties of Cardiac Tissue

This week Biophysics Reviews published my article “Bidomain Modeling of Electrical and Mechanical Properties of Cardiac Tissue” (Volume 2, Article Number 041301, 2021). The introduction states
This review discusses the bidomain model, a mathematical description of cardiac tissue. Most of the review covers the electrical bidomain model, used to study pacing and defibrillation of the heart. For a book-length analysis of this topic, consult the recently published second edition of Cardiac Bioelectric Therapy. In particular, one chapter in that book complements this review: it contains a table listing many bidomain predictions and their experimental confirmation, includes many original figures from earlier publications, and cites additional references. Near the end, the review covers the mechanical bidomain model, which describes mechanotransduction and the resulting growth and remodeling of cardiac tissue.

The review has several aims: to (1) introduce the bidomain model to younger investigators who are bringing new technologies from outside biophysics into cardiac physiology; (2) examine the interaction of theory and experiment in biological physics; (3) emphasize intuitive understanding by focusing on simple models and qualitative explanations of mechanisms; and (4) highlight unresolved controversies and open questions. The overall goal is to enable technologists entering the field to more effectively contribute to some of the pressing scientific questions facing physiologists.

My manuscript traveled a long and winding road. The initial version was a personal account of my career as I worked on the bidomain model (Russ Hobbie and I discuss the bidomain concept in Chapter 7 of Intermediate Physics for Medicine and Biology), and was organized around ten papers I published between 1986 and 2010, with an emphasis on the 1990s. My first draft (and all subsequent ones) benefited from thoughtful comments by my former graduate student, Dilmini Wijesinghe. After I fixed all the problems Dilmini found, I sent the initial version to the editor. He responded that the journal board wanted a more traditional, authoritative review article. That was fine, so I transformed the paper from a memoir into a review, and submitted it officially to the journal. Then the reviewers had a couple of rounds of helpful comments, leading to more revisions. Next, there were changes in the page proofs to fulfill all the journal editorial rules. At last, it was published.

The final version is unlike the initial one. I changed the perspective from first person to third; added figures; increased the number of references by almost 50%; and deleted all the reminiscences, colorful anecdotes, and old war stories. 

I hope you enjoy the peer-reviewed, published article. If you want to read the original version (the one with the war stories), you can find it here.  

I made a word cloud based on the article. The giant “Roth” is embarrassing, but otherwise it provides a nice summary of what the paper is about.

Word Cloud of "Bidomain Modeling of Electrical and Mechanical Properties of Cardiac Tissue."

Biophysics Reviews is a new journal, edited by my old friend Kit Parker. Long-time readers of this blog may remember Parker as the guy who said “our job is to find stupid and get rid of it.” Listen to him describe his goals as Editor-in-Chief.

Kit Parker, Editor-in-Chief of Biophysics Reviews, introduces the journal.

https://www.youtube.com/watch?v=2V1fpskjJtM

Friday, June 4, 2021

The Bidomain Model of Cardiac Tissue: Predictions and Experimental Verification

“The Bidomain Model of Cardiac Tissue: Predictions and Experimental Verification” superimposed on Intermediate Physics for Medicine and Biology.

In the early 1990s, I was asked to write a chapter for a book titled Neural Engineering. My chapter had nothing to do with nerves, but instead was about cardiac tissue analyzed with the bidomain model. (You can learn more about the bidomain model in Chapter 7 of Intermediate Physics for Medicine and Biology.) 

“The Bidomain Model of Cardiac Tissue: Predictions and Experimental Verification” was submitted to the editors in January 1993. Alas, the book was never published. However, I still have a copy of the chapter, and you can download it here. Now—after nearly thirty years—it’s obsolete, but it provides a glimpse into the pressing issues of that time.

I was an impudent young buck back in those days. Three times in the chapter I recast the arguments of other scientists (my competitors) as syllogisms. Then, I asserted that their premise was false, so their conclusion was invalid (I'm sure this endeared me to them). All three syllogisms dealt with whether or not cardiac tissue could be treated as a continuous tissue, as opposed to a discrete collection of cells.

The Spach Experiment

The first example had to do with the claim by Madison Spach that the rate of rise of the cardiac action potential, and the time constant of the action potential foot, varied with direction.

Continuous cable theory predicts that the time course of the action potential does not depend on differences in axial resistance with direction.

The rate of rise of the cardiac wave front is observed experimentally to depend on the direction of propagation.

Therefore, cardiac tissue does not behave like a continuous tissue.
I then argued that their first premise is incorrect. In one-dimensional cable theory, the time course of the action potential doesn’t depend on axial resistance, as Spach claimed. But in a three-dimensional slab of tissue superfused by a bath, the time course of the action potential depends on the direction of propagation. Therefore, I contended, their conclusion didn’t hold; their experiment did not prove that cardiac tissue isn’t continuous. To this day the issue is unresolved.

Defibrillation

A second example considered the question of defibrillation. When a large shock is applied to the heart, can its response be predicted using a continuous model, or are discrete effects essential for describing the behavior?
An applied current depolarizes or hyperpolarizes the membrane only in a small region near the ends of a continuous fiber.

For successful defibrillation, a large fraction of the heart must be influenced by the stimulus.

Therefore, defibrillation cannot be explained by a continuous model.
I argued that the problem is again with the first premise, which is true for tissue having “equal anisotropy ratios” (the same ratio of conductivity parallel and perpendicular to the fibers, in both the intracellular and extracellular spaces), but is not true for “unequal anisotropy ratios.” (Homework Problem 50 in Chapter 7 of IPMB examines unequal anisotropy ratios in more detail). If the premise is false, the conclusion is not proven. This issue is not definitively resolved even today, although the sophisticated simulations of realistically shaped hearts with their curving fiber geometry, performed by Natalia Trayanova and others, suggest that I was right.

Reentry Induction

The final example deals with the induction of reentry by successive stimulation through a point electrode. As usual, I condensed the existing dogma to a syllogism.
In a continuous tissue, the anisotropy can be removed by a coordinate transformation, so reentry caused by successive stimulation through a single point electrode cannot occur, since there is no mechanism to break the directional symmetry.

Reentry has been produced experimentally by successive stimulation through a single point electrode.

Therefore, cardiac tissue is not continuous.

Once again, that pesky first premise is the problem. In tissue with equal anisotropy ratios you can remove anisotropy by a coordinate transformation, so reentry is impossible. However, if the tissue has unequal anisotropy ratios the symmetry is broken, and reentry is possible. Therefore, you can’t conclude that the observed induction of reentry by successive stimulation through a point electrode implies the tissue is discrete.
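The coordinate-transformation argument is easy to make concrete in Python (a sketch with illustrative conductivities, not measured values). Stretching the coordinate perpendicular to the fibers rescales the transverse conductivity; with equal anisotropy ratios one stretch renders both spaces isotropic at once, but with unequal ratios the same stretch leaves one space anisotropic, so the symmetry is broken.

import numpy as np

def stretch(sigma_L, sigma_T, b):
    """Effective (L, T) conductivities after rescaling y -> y/b."""
    return sigma_L, sigma_T / b**2

# Equal anisotropy ratios: one stretch makes BOTH spaces isotropic.
si, se = (0.2, 0.02), (0.08, 0.008)          # ratio 10 in each space
b = np.sqrt(si[1] / si[0])                   # choose b**2 = sigma_T / sigma_L
print(stretch(*si, b), stretch(*se, b))      # (0.2, 0.2) and (0.08, 0.08)

# Unequal ratios: the same stretch leaves the extracellular space anisotropic.
se = (0.08, 0.03)                            # ratio ~2.7 instead of 10
print(stretch(*se, b))                       # (0.08, 0.3): still anisotropic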


I always liked this book chapter, in part because of the syllogisms, in part because of its emphasis on predictions and experiments, but mainly because it provides a devastating counterargument to claims that cardiac tissue acts discretely. Although it was never published, I did send preprints around to some of my friends, and the chapter took on a life of its own. This unpublished manuscript has been cited 13 times!

Trayanova N, Pilkington T (1992) “The use of spectral methods in bidomain studies,” Critical Reviews in Biomedical Engineering, Volume 20, Pages 255–277.

Winfree AT (1993) “How does ventricular tachycardia turn into fibrillation?” In: Borgreffe M, Breithardt G, Shenasa M (eds), Cardiac Mapping, Mt. Kisco NY, Futura, Chapter 41, Pages 655–680.

Henriquez CS (1993) “Simulating the electrical behavior of cardiac tissue using the bidomain model,” Critical Reviews of Biomedical Engineering, Volume 21, Pages 1–77.

Wikswo JP (1994) “The complexities of cardiac cables: Virtual electrode effects,” Biophysical Journal, Volume 66, Pages 551–553.

Winfree AT (1994) “Puzzles about excitable media and sudden death,” Lecture Notes in Biomathematics, Volume 100, Pages 139–150.

Roth BJ (1994) “Mechanisms for electrical stimulation of excitable tissue,” Critical Reviews in Biomedical Engineering, Volume 22, Pages 253–305.

Roth BJ (1995) “A mathematical model of make and break electrical stimulation of cardiac tissue by a unipolar anode or cathode,” IEEE Transactions on Biomedical Engineering, Volume 42, Pages 1174–1184.

Wikswo JP Jr, Lin S-F, Abbas RA (1995) “Virtual electrodes in cardiac tissue: A common mechanism for anodal and cathodal stimulation,” Biophysical Journal, Volume 69, Pages 2195–2210.

Roth BJ, Wikswo JP Jr (1996) “The effect of externally applied electrical fields on myocardial tissue,” Proceedings of the IEEE, Volume 84, Pages 379–391.

Goode PV, Nagle HT (1996) “On-line control of propagating cardiac wavefronts,” The 18th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Amsterdam.

Winfree AT (1997) “Rotors, fibrillation, and dimensionality,” In: Holden AV, Panfilov AV (eds): Computational Biology of the Heart, Chichester, Wiley, Pages 101–135.

Winfree AT (1997) “Heart muscle as a reaction-diffusion medium: The roles of electric potential diffusion, activation front curvature, and anisotropy,” International Journal of Bifurcation and Chaos, Volume 7, Pages 487–526.

Winfree AT (1998) “A spatial scale factor for electrophysiological models of myocardium,” Progress in Biophysics and Molecular Biology, Volume 69, Pages 185–203.
I’ll end with the closing paragraph of the chapter.
The bidomain model ignores the discrete nature of cardiac cells, representing the tissue as a continuum instead. Experimental evidence is often cited to support the hypothesis that the discrete nature of the cells plays a key role in cardiac electrophysiology. In each case, the bidomain model offers an alternative explanation for the phenomena. It seems wise at this point to reconsider the evidence that indicates the significance of discrete effects in healthy cardiac tissue. The continuous bidomain model explains the data, recorded by Spach and his colleagues, showing different rates of rise during propagation parallel and perpendicular to the fibers, anodal stimulation, arrhythmia development by successive stimulation from a point source, and possibly defibrillation. Of course, these alternative explanations do not imply that discrete effects are not responsible for these phenomena, but only that two possible mechanisms exist rather than one. Experiments must be found that differentiate unambiguously between alternative models. In addition, discrete junctional resistance must be incorporated into the bidomain model. Only when such experiments are performed and the models are further developed will we be able to say with any certainty that cardiac tissue can be described as a continuum.

Friday, February 12, 2021

A Mechanism for the Dip in the Strength-Interval Curve During Anodal Stimulation of Cardiac Tissue

Scientific articles aren’t published until they’ve undergone peer review. When a manuscript is submitted to a scientific journal, the editor asks several experts to read it and provide their recommendation. All my papers were reviewed and most were accepted and published, although usually after a revision. Today, I’ll tell you about one of my manuscripts that did not survive peer review. I’m glad it didn’t.

In the early 1990s, I was browsing in the library at the National Institutes of Health—where I worked—and stumbled upon an article by Egbert Dekker about the dip in the anodal strength-interval curve.

Dekker, E. (1970)  “Direct Current Make and Break Thresholds for Pacemaker Electrodes on the Canine Ventricle,” Circulation Research, Volume 27, Pages 811–823.
In Dekker’s experiment, he stimulated a dog heart twice: first (S1) to excite an action potential, and then again (S2) during or after the refractory period. You expect that for a short interval between S1 and S2 the tissue is still refractory, or unexcitable, and you’ll get no response to S2. Wait a little longer and the tissue is partially refractory; you’ll excite a second action potential if S2 is strong enough. Wait longer still and the tissue will have returned to rest; a weak S2 will excite it. So, a plot of S2 threshold strength versus S1-S2 interval (the strength-interval curve) ought to decrease.

Dekker observed that the strength-interval curve behaved as expected when S2 was provided by a cathode (an electrode having a negative voltage). A positive anode, however, produced a strength-interval curve containing a dip. In other words, there was an oddball section of the anodal curve that increased with the interval. 

The cathodal and anodal strength-interval curves.
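A toy model shows why the curve is expected to fall monotonically. This Python sketch is illustrative, not Dekker’s data: it takes the S2 threshold to be the diastolic threshold divided by the fraction of excitability recovered since the end of the absolute refractory period.

import math

# Toy strength-interval curve; t_ref, tau, and the functional form are
# assumptions for illustration only.
def threshold(interval, t_ref=250.0, tau=30.0, s_diastolic=1.0):
    recovered = 1.0 - math.exp(-(interval - t_ref) / tau)   # 0 -> 1 recovery
    return s_diastolic / recovered

for interval in (260, 280, 300, 350, 400):   # S1-S2 interval in ms
    print(interval, "ms :", round(threshold(interval), 2))
# strength falls smoothly with interval; Dekker's anodal dip is the anomaly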

Moreover, Dekker observed two types of excitation: make and break. Make occurred after a stimulus pulse began, and break after it ended. Both anodal and cathodal stimuli could cause make and break excitation. (For more about make and break, see my previous post.)

I decided to examine make and break excitation and the dip in the anodal strength-interval curve using a computer simulation. The bidomain model (see Section 7.9 in Intermediate Physics for Medicine and Biology) represented the anisotropic electrical properties of cardiac tissue. The introduction of the resulting paper stated

In this study, my primary goal is to present a hypothesis for the mechanism of the dip in the anodal strength-interval curve: The dip arises from a complex interaction between anode-break and anode-make excitation. This hypothesis is explored in detail and supported by numerical calculations using the bidomain model. The same mechanism may explain the no-response phenomenon. I also consider the induction of periodic responses [a cardiac arrhythmia] from a premature anodal stimulus. The bidomain model was used previously to investigate the cathodal strength-interval curve; in this study, these calculations are extended to investigate anodal stimulation.
When I submitted this manuscript to a journal, it was rejected! Why? It contained a fatal flaw. To represent how the membrane ion channels opened and closed, I had used the Hodgkin and Huxley model, appropriate for a nerve axon. Yet, the nerve and cardiac action potentials are different. For example, the action potential in the heart lasts a hundred times longer than in a nerve.

After swearing and pouting, I calmed down and redid the calculation using an ion channel model more appropriate for cardiac tissue, and then published a series of papers that are among my best.
Roth, B. J. (1995) “A Mathematical Model of Make and Break Electrical Stimulation of Cardiac Tissue by a Unipolar Anode or Cathode,” IEEE Transactions on Biomedical Engineering, Volume 42, Pages 1174–1184.

Roth, B. J. (1996) “Strength-Interval Curves for Cardiac Tissue Predicted Using the Bidomain Model,” Journal of Cardiovascular Electrophysiology, Volume 7, Pages 722–737.

Roth, B. J. (1997) “Nonsustained Reentry Following Successive Stimulation of Cardiac Tissue Through a Unipolar Electrode,” Journal of Cardiovascular Electrophysiology, Volume 8, Pages 768–778.
I kept a copy of the rejected paper (you can download it here). It’s interesting for what it got right, and what it got wrong.
 
The response of cardiac tissue to S1/S2 stimulation, for a cathode (top) and anode (bottom).
"Strength" is the S2 strength, and "Interval" is the S1-S2 interval.
"No Response" (N) means S2 did not excite an action potential,
"Make" means an action potential was excited after S2 turned on,
"Break" means an action potential was excited after S2 turned off, and
"E" means one (gray) or more (black) extra action potentials were triggered by S2 (reentry).
Beware! These calculations were from my rejected paper.

What it got right: The paper identified make and break regions of the strength-interval curve, predicted a dip in the anodal curve but not the cathodal curve, and produced reentry for strong stimuli near the make/break transition. It even reproduced the no-response phenomenon, in which a strong stimulus excites an action potential but an even stronger stimulus does not.

What it got wrong: Cathode-break excitation was missing. The mechanism for anode-break excitation was incorrect. The Hodgkin-Huxley model predicts that anode-break excitation arises from the ion channel kinetics (for the cognoscenti, hyperpolarization removes sodium channel inactivation). This type of anode-break excitation doesn’t happen in the heart but did occur in my simulations, leading me astray. This wrong anode-break mechanism led to wrong explanations for the dip in the anodal strength-interval curve and the no-response phenomenon. (For the correct mechanism, look here.)
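This Hodgkin-Huxley flavor of anode-break excitation, the very mechanism that led me astray, is easy to reproduce in a minimal patch simulation. Below is a sketch using the standard squid-axon parameters; the pulse amplitude and duration are my illustrative choices, not values from the manuscript. Hyperpolarize for 20 ms, release, and the peak voltage after release reveals the rebound spike.

import numpy as np

# Minimal Hodgkin-Huxley membrane patch (standard squid-axon parameters;
# v is in mV relative to rest, t in ms, currents in uA/cm^2). Releasing a
# hyperpolarizing pulse raises h (removes sodium inactivation) and lowers
# n, so the membrane can fire a rebound spike: anode-break excitation.

def rates(v):
    am = 0.1 * (25 - v) / (np.exp((25 - v) / 10) - 1)
    bm = 4 * np.exp(-v / 18)
    ah = 0.07 * np.exp(-v / 20)
    bh = 1 / (np.exp((30 - v) / 10) + 1)
    an = 0.01 * (10 - v) / (np.exp((10 - v) / 10) - 1)
    bn = 0.125 * np.exp(-v / 80)
    return am, bm, ah, bh, an, bn

gNa, gK, gL = 120.0, 36.0, 0.3        # mS/cm^2
ENa, EK, EL = 115.0, -12.0, 10.6      # mV relative to rest
C, dt = 1.0, 0.01                     # uF/cm^2 and ms

v = 0.0
am, bm, ah, bh, an, bn = rates(v)     # start the gates at rest
m, h, n = am / (am + bm), ah / (ah + bh), an / (an + bn)

peak = -1e9
for step in range(int(60 / dt)):                  # simulate 60 ms
    t = step * dt
    I_stim = -10.0 if t < 20.0 else 0.0           # hyperpolarizing (anodal) pulse
    am, bm, ah, bh, an, bn = rates(v)
    m += dt * (am * (1 - m) - bm * m)
    h += dt * (ah * (1 - h) - bh * h)
    n += dt * (an * (1 - n) - bn * n)
    I_ion = gNa * m**3 * h * (v - ENa) + gK * n**4 * (v - EK) + gL * (v - EL)
    v += dt * (I_stim - I_ion) / C                # forward Euler
    if t >= 20.0:
        peak = max(peak, v)

print(f"peak depolarization after release: {peak:.1f} mV")
# a peak tens of mV above rest signals the rebound (anode-break) spike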

Below I reproduce the final paragraph of the manuscript, with the parts that were wrong in red.
“What useful conclusions result from these simulations?” is a fair question, given the limitations of the model. I believe the primary contribution is a hypothetical mechanism for the dip in the anodal strength-interval curve. The dip may arise from a complex interaction of anode-break and anode-make stimulation: A nonpropagating active response at the virtual cathode raises the threshold for anode-break stimulation under the anode. The same interaction could explain the no-response phenomenon. A second contribution is a hypothesis for the mechanism generating periodic responses to strong anodal stimuli: Anode-make stimulation cannot propagate back toward the anode because of the strong hyperpolarization, and the subsequent excitation of the tissue under the anode occurs with sufficient delay that a reentrant loop arises. This hypothesis is related to, but not the same as, the one presented by Saypol and Roth for cathodally induced periodic responses. These mechanisms are suggested by my numerical simulation using a simplified model; whether they play a role in the behavior of real cardiac tissue is unknown. Hopefully, my results will encourage more accurate simulations and, even more importantly, additional experimental measurements of the spatial-temporal distribution of transmembrane potential around the stimulating electrode during premature stimulation of cardiac tissue.
Even though this manuscript was flawed, it foreshadowed much of my research program for the mid 1990s; it was all there, in the rough. Moreover, in this case the reviewers were right and I was wrong. At the time, I was angry that anyone would reject my paper. Now, in retrospect, I realize they did me a favor; I benefited from their advice. For any young scientist who might be reading this post, don’t be too discouraged by critical reviews and rejection. Give yourself a day to whine and fuss, then fix the problems that need fixing and move on. That’s the way peer review works.

Friday, December 4, 2020

Role of Virtual Electrodes in Arrhythmogenesis: Pinwheel Experiment Revisited

The Journal of Cardiovascular Electrophysiology, with a figure from Lindblom et al. on the cover, superimposed on Intermediate Physics for Medicine and Biology.

Twenty years ago, I published an article with Natalia Trayanova and her student Annette Lindblom about initiating an arrhythmia in cardiac muscle (“Role of Virtual Electrodes in Arrhythmogenesis: Pinwheel Experiment Revisited,” Journal of Cardiovascular Electrophysiology, Volume 11, Pages 274–285, 2000). We performed computer simulations based on the bidomain model, which Russ Hobbie and I discuss in Section 7.9 of Intermediate Physics for Medicine and Biology. A key feature of a bidomain is anisotropy: the electrical conductivity varies with direction relative to the axis of the myocardial fibers.

Our results are summarized in the figure below (Fig. 14 of our article). An initial stimulus (S1) launched a planar wavefront through the tissue, either parallel to (longitudinal, L) or perpendicular to (transverse, T) the fibers (horizontal). As the tissue recovered from the first wave front, we applied a second stimulus (S2) to a point cathodal electrode (C), inducing a complicated pattern of depolarization under the cathode and two regions of hyperpolarization (virtual anodes) adjacent to the cathode along the fiber axis (see my previous blog post for more about how cardiac tissue responds to a point stimulus). In some simulations, we reversed the polarity of S2 so the electrode was an anode (A). This pair of stimuli (S1-S2) underlies the “pinwheel experiment” that has been studied by many investigators, but never before using the anisotropic bidomain model. 

Fig. 14 from Lindblom et al. (2000).

We found a variety of behaviors, depending on the direction of the S1 wave front, the polarity of the S2 stimulus, and the time between S1 and S2, known as the coupling interval (CI). In some cases, we induced a figure-of-eight reentrant circuit: an arrhythmia consisting of two spiral waves, one rotating clockwise and the other counterclockwise. In other cases, we induced quatrefoil reentry: an arrhythmia consisting of four spiral waves (see my previous post for more about the difference between these two behaviors).

I began working on these calculations in the winter of 1999, shortly after I arrived at Oakland University as an Assistant Professor. The photograph below is of a page from my research notebook on March 5 showing initial results, including my first observation of quatrefoil reentry in the pinwheel experiment (look for “Quatrefoil!”).

The March 5, 1999 entry from my research notebook, showing my first observation of quatrefoil reentry induced during the pinwheel experiment.

A few weeks later I got a call from my friend Natalia (see my previous post about an earlier collaboration with her). She was organizing a session for the IEEE Engineering in Medicine and Biology Society conference, to be held in Atlanta that October, and asked me to give a talk. We got to chatting and she started to describe simulations she and Lindblom were doing. They were the same calculations I was analyzing! I told her about my results, and we decided to collaborate on the project, which ultimately led to our Journal of Cardiovascular Electrophysiology paper.

Our article was full of beautiful color figures showing the different types of arrhythmias. Below is a photo of two pages of the article. Those familiar with my previous publications will notice that the color scheme representing the transmembrane potential is different than what I usually used. Lindblom and Trayanova had their own color scale, and we decided to adopt it rather than mine. One of the figures was featured on the cover of the March 2000 issue of the journal. Lindblom made some lovely movies to go along with these figures, but they’re now lost in antiquity. I later discovered that a simple cellular automata model could reproduce many of these results (see my previous post for details).

Two pages from Lindblom et al. (2000), showing some of the color figures.

The editor asked Art Winfree to write an editorial to go along with our article (see my previous post about Winfree). I especially like his closing remarks.

This is clearly a landmark event in cardiac electrophysiology at the end of our century. It is sure to have major implications for clinical electrophysiologic work and for defibrillator design.
In retrospect, he was overly optimistic; the paper was an incremental contribution, not a landmark event of the 20th century. But I appreciated his kind words.

Friday, July 10, 2020

An S1 Gradient of Refractoriness is Not Essential for Reentry Induction by an S2 Stimulus

Sometimes the shortest papers are my favorites. Take, for example, an article that I published twenty years ago last month: a two-page communication in the IEEE Transactions on Biomedical Engineering titled “An S1 Gradient of Refractoriness is Not Essential for Reentry Induction by an S2 Stimulus” (Volume 47, Pages 820–821, 2000). It analyzes the electrical stimulation of cardiac tissue, and focuses on the mechanism for inducing an arrhythmia.

The introduction is two short paragraphs (a mere hundred words). The first puts the work in context.
Successive stimulation (S1, then S2) of cardiac tissue can induce reentry. In many cases, an S1 stimulus triggers a propagating action potential that creates a gradient of refractoriness. The S2 stimulus then interacts with this S1 refractory gradient, causing reentry. Many theoretical and experimental studies of reentry induction are variations on this theme [1]–[9].
When I wrote this communication, the critical point hypothesis was a popular explanation for how to induce reentry in cardiac tissue. I cited nine papers discussing this hypothesis, but I associate it primarily with the books of Art Winfree and the experiments of Ray Ideker.
The critical point hypothesis. The top panel shows the S1 wave front just before the S2 stimulus; the bottom panel shows the tissue just after the S2 stimulus, and the resulting reentry.
The figure above illustrates the critical point hypothesis. A first (S1) stimulus is applied to the right edge of the tissue, launching a planar wavefront that propagates to the left (arrow). By the time of the upper snapshot, the tissue on the right (purple) has returned to rest and recovered excitability, while the tissue on the left (red) remains refractory. The green line represents the boundary between refractory and excitable regions: the line of critical refractoriness.

The lower snapshot is immediately after a second (S2) stimulus is applied through a central cathode (black dot). The tissue near the cathode experiences a strong stimulus above threshold (yellow), while the remaining tissue experiences a weak stimulus below threshold. The green curve represents the boundary between the above-threshold and below-threshold regions: the circle of critical stimulus. S2 only excites tissue that is excitable and has a stimulus above threshold (inside the circle on the right). It launches a wave front that propagates to the right, but cannot propagate to the left because of refractoriness. Only when the refractory tissue recovers excitability will the wave front begin to propagate leftward (curved arrow). Critical points (blue dots) are located where the line of critical refractoriness intersects the circle of critical stimulus. Two spiral waves—a type of cardiac arrhythmia where a wave front circles around a critical point, chasing its tail—rotate clockwise on the bottom and counterclockwise on the top.

A beautiful paper from Ideker’s lab provides evidence supporting the critical point hypothesis: N. Shibata, P.-S. Chen, E. G. Dixon, P. D. Wolf, N. D. Danieley, W. M. Smith, and R. E. Ideker (1988) “Influence of Shock Strength and Timing on Induction of Ventricular Arrhythmias in Dogs,” American Journal of Physiology, Volume 255, Pages H891–H901.

The second paragraph of my communication begins with a question.
Is the S1 gradient of refractoriness essential for the induction of reentry? In this communication, my goal is to show by counterexample that the answer is no. In my numerical simulation, the transmembrane potential is uniform in space before the S2 stimulus. Nevertheless, the stimulus induces reentry.
The critical point hypothesis implies the answer is yes; without a refractory gradient there is no line of critical refractoriness, no critical point, no spiral wave, no reentry. Yet I claimed that the gradient of refractoriness is not essential. To explain why, we must consider what happens following the second stimulus.
Cathode break excitation and the resulting quatrefoil reentry.
The tissue is depolarized (D, yellow) under the cathode but is hyperpolarized (H, purple) in adjacent regions along the fiber direction on each side of the cathode, often called virtual anodes. Hyperpolarization lowers the membrane potential toward rest, shortening the refractory period (deexcitation) and carving out an excitable path. When S2 ends, the depolarization under the cathode diffuses into the newly excitable tissue (dashed arrows), launching a wave front that propagates initially in the fiber direction (solid arrows): break excitation. Only after the surrounding tissue recovers excitability does the wave front begin to rotate back, as if there were four critical points: quatrefoil reentry.

Russ Hobbie and I discuss break excitation in a homework problem in Chapter 7 of Intermediate Physics for Medicine and Biology.
Problem 48. During stimulation of cardiac tissue through a small anode, the tissue under the electrode and in the direction perpendicular to the myocardial fibers is hyperpolarized, and adjacent tissue on each side of the anode parallel to the fiber direction is depolarized. Imagine that just before this stimulus pulse is turned on the tissue is refractory. The hyperpolarization during the stimulus causes the tissue to become excitable. Following the end of the stimulus pulse, the depolarization along the fiber direction interacts electrotonically with the excitable tissue, initiating an action potential (break excitation). (This type of break excitation is very different than the break excitation analyzed on page 181.)
(a) Sketch pictures of the transmembrane potential distribution during the stimulus. Be sure to indicate the fiber direction, the location of the anode, the regions that are depolarized and hyperpolarized by the stimulus, and the direction of propagation of the resulting action potential.
(b) Repeat the analysis for break excitation caused by a cathode instead of an anode. For a hint, see Wikswo and Roth (2009).
Now we come to the main point of the communication; the reason I wrote it. Look at the first snapshot in the illustration above, the one labeled S1 that occurs just before the S2 stimulus. The tissue is all red. It is uniformly refractory. The S1 action potential has no gradient of refractoriness, yet reentry occurs. This is the counterexample that proves the point: a gradient of refractoriness is not essential.

The communication contains one figure, showing the results of a calculation based on the bidomain model. The time in milliseconds after S1 is in the upper right corner of each panel. S1 was applied uniformly to the entire tissue, so at 70 ms the refractoriness is uniform. The 80 ms frame is during S2. Subsequent frames show break excitation and the development of reentry.

An illustration based on Fig. 1 in “An S1 Gradient of Refractoriness is Not Essential for Reentry Induction by an S2 Stimulus” (IEEE Trans. Biomed. Eng., Volume 47, Pages 820–821, 2000). It is the same as the figure in the communication, except the color and quality are improved.
The communication concludes:
My results support the growing realization that virtual electrodes, hyperpolarization, deexcitation, and break stimulation may be important during reentry induction [8], [9], [14], [15], [21]–[24]. An S1 gradient of refractoriness may underlie reentry induction in many cases [1]–[6], but this communication provides a counterexample demonstrating that an S1 gradient of refractoriness is not necessary in every case.
This is a nice calculation, but is it consistent with experiment? Look at Y. Cheng, V. Nikolski, and I. R. Efimov (2000) “Reversal of Repolarization Gradient Does Not Reverse the Chirality of Shock-Induced Reentry in the Rabbit Heart,” Journal of Cardiovascular Electrophysiology, Volume 11, Pages 998–1007. These researchers couldn’t produce uniform refractoriness, so they did the next best thing: repeated the experiment using S1 wave fronts propagating in different directions. They always obtained the same result, independent of the location and timing of the critical line of refractoriness.

Does this calculation mean the critical point hypothesis is wrong? No. See my paper with Natalia Trayanova and her student Annette Lindblom (“The Role of Virtual Electrodes in Arrhythmogenesis: Pinwheel Experiment Revisited,” Journal of Cardiovascular Electrophysiology, Volume 11, Pages 274–285, 2000) to examine how this view of reentry can be reconciled with the critical point hypothesis.

One of the best things about this calculation is that you don’t need a fancy computer to demonstrate that the S1 gradient of refractoriness is not essential; a simple cellular automaton will do. The figure below sums it up (look here if you don’t understand), and a sketch of such an automaton follows it.

A cellular automaton demonstrating that an S1 gradient of refractoriness is not essential for reentry induction by an S2 stimulus.
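For readers who want to try it, here is a minimal Python sketch of such a cellular automaton (my own toy implementation, not the original calculation; the grid size, state durations, and S2 geometry are guesses, so tweak them if the activity dies out). S1 excites every cell at once, leaving perfectly uniform refractoriness; S2 then excites a central block and deexcites two flanking blocks, mimicking cathode-break excitation.

import numpy as np

# Each cell is resting (0), excited (1..EXC), or refractory (EXC+1..EXC+REF).
EXC, REF, N = 2, 10, 60
grid = np.ones((N, N), dtype=int)            # S1: uniform excitation at t = 0

def step(g):
    excited = (g >= 1) & (g <= EXC)
    nbr = np.zeros_like(g, dtype=bool)       # does any of the 4 neighbors fire?
    nbr[1:, :] |= excited[:-1, :]
    nbr[:-1, :] |= excited[1:, :]
    nbr[:, 1:] |= excited[:, :-1]
    nbr[:, :-1] |= excited[:, 1:]
    new = np.where(g > 0, g + 1, 0)          # advance excited/refractory clocks
    new[new > EXC + REF] = 0                 # recover to rest
    new[(g == 0) & nbr] = 1                  # rest + firing neighbor -> excited
    return new

for t in range(1, 200):
    if t == 4:                               # S2 while every cell is refractory
        grid[28:32, 28:32] = 1               # depolarization under the cathode
        grid[28:32, 18:28] = 0               # deexcited virtual anode (left)
        grid[28:32, 32:42] = 0               # deexcited virtual anode (right)
    grid = step(grid)
    if t in (20, 60, 120, 190):
        n_exc = int(((grid >= 1) & (grid <= EXC)).sum())
        print(t, n_exc, "excited cells")
# activity persisting far beyond one action potential (EXC + REF steps)
# signals reentry, despite the perfectly uniform refractoriness after S1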

Wednesday, April 29, 2020

The Toroid Illustration (Fig. 8.26)

In Chapter 8 (Biomagnetism) of Intermediate Physics for Medicine and Biology, Russ Hobbie and I show an illustration of a nerve axon threaded through a magnetic toroid to measure its magnetic field (Fig. 8.26).
Fig. 8.26. A nerve cell preparation is threaded through the magnetic toroid to measure the magnetic field. The changing magnetic flux in the toroid induces an electromotive force in the winding. Any external current that flows through the hole in the toroid diminishes the magnetic field.
While this figure is clear and correct, I wondered if we could do better. I started with a figure of a toroidal coil from a paper I published with my PhD advisor John Wikswo and his postdoc Frans Gielen.
Gielen FLH, Roth BJ, Wikswo JP Jr (1986) Capabilities of a Toroid-Amplifier System for Magnetic Measurements of Current in Biological Tissue. IEEE Trans. Biomed. Eng. 33:910-921.
Starting with Figure 1 from that paper (you can find a copy of that figure in a previous post), I modified it to resemble Fig. 8.26, but with a three-dimensional appearance. I also added color. The result is shown below.

An axon (purple) is threaded through a toroid to measure the magnetic field. The toroid has a ferrite core (green) that is wound with insulated copper wire (blue). It is then sealed in a coating of epoxy (pink). The entire preparation is submerged in a saline bath. The changing magnetic flux in the ferrite induces an electromotive force in the winding. Any current in the bath that flows through the hole in the toroid diminishes the magnetic field.
Do you like it?

To learn more about how I wound the wire and applied the epoxy coating, see my earlier post about The Magnetic Field of a Single Axon. The part about "any current in the bath that flows through the hole in the toroid diminishes the magnetic field" is described in more detail in my post about the Bubble Experiment.

Tuesday, April 28, 2020

Chernobyl Then and Now: A Global Perspective

Last year I was supposed to give a talk at Oakland University for a symposium about “Chernobyl Then and Now: A Global Perspective.” It was part of an exhibition at the OU Art Gallery titled “McMillan’s Chernobyl: An Intimation of the Way the World Would End.” My role at the symposium was to explain the factors that led to the explosion of the Chernobyl nuclear power plant. I was chosen by the organizer, OU Professor of Art History Claude Baillargeon, because I had taught a class about The Making of the Atomic Bomb in Oakland’s Honors College.

Readers of Intermediate Physics for Medicine and Biology should become familiar with the Chernobyl disaster because it illustrates how exposure to radiation can affect people over different time scales, from short term acute radiation sickness to long-term radiation-induced cancer.

It turned out I could not attend the symposium. My friend Gene Surdutovich stepped in at the last minute to replace me, and because he is from Ukraine—where the disaster occurred—he provided more insight than I could have. However, I thought the readers of this blog might want to read a transcript of the talk I planned to present. It was supposed to be my “TED Talk,” aimed at a broad audience with limited scientific background. No Powerpoint, no blackboard; just a few balls and a pencil as props.
The nuclear reactor in Chernobyl had an inherently unstable design that led to the worst nuclear accident in history. To understand why the design was so unstable, we need to review some physics.

The nucleus of an atom contains protons and neutrons. The number of protons determines what element you have. For instance, a nucleus with 92 protons is uranium. The number of neutrons determines the isotope. If a nucleus has 92 protons and 146 neutrons it is uranium-238 (because 92 + 146 = 238). Uranium-238 is the most common isotope of uranium (about 99% of natural uranium is uranium-238). If the nucleus has three fewer neutrons, that is only 143 neutrons instead of 146, it’s uranium-235, a rare isotope of uranium (about 1% of natural uranium is uranium-235).

No stable isotopes of uranium exist, but both uranium-235 and uranium-238 have very long half-lives (a half-life is how long it takes for half the nuclei to decay). The half-life of uranium-238 is about 4.5 billion years, roughly the age of the earth, and that of uranium-235 is about 0.7 billion years. So many of the atoms of uranium that originally formed with the earth have not decayed away yet, and still exist in our rocks. We can use them as nuclear fuel.

Although uranium-235 is the rarer of the two long-lived isotopes, it is the one that is the fuel for a nuclear reactor. The uranium-235 nucleus is “fissile” meaning that it is so close to being unstable that a single neutron can trigger it to break in two pieces, releasing energy and two additional neutrons. This is called nuclear fission.

A nuclear chain reaction can start with a lot of uranium-235 and a single neutron. The neutron causes a uranium-235 nucleus to fission, breaking into two pieces plus releasing two additional neutrons and energy. These two neutrons hit two other uranium-235 nuclei, causing each of them to fission, releasing a total of four neutrons plus more energy. These four neutrons hit four other uranium-235 nuclei, releasing eight neutrons… and so on. The atomic bomb dropped on Hiroshima at the end of World War Two was based on just such an uncontrolled uranium-235 chain reaction. Fortunately, there are ways to control the chain reaction, so it can be used for more peaceful purposes, such as a nuclear reactor.
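A quick back-of-the-envelope calculation (mine, not part of the talk) shows how fast that doubling adds up:

```latex
% Neutrons after n generations of a doubling chain reaction.
\[
  N_n = 2^n , \qquad N_{80} = 2^{80} \approx 1.2 \times 10^{24}
\]
% That is about two moles of neutrons, each splitting one nucleus: roughly
% 80 doublings are enough to fission on the order of half a kilogram of
% uranium-235 (one mole of uranium-235 is 235 grams).
```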

One surprising feature of uranium-235 is that SLOW neutrons are more likely to split the nucleus than FAST neutrons. How this effect was discovered is an interesting story. Enrico Fermi, an Italian physicist, was studying nuclear reactions in the 1930s by bombarding different materials with neutrons. He observed more nuclear reactions if his apparatus sat on a wooden table top than if it sat on a marble table top! What? It turns out wood was better at slowing the neutrons than marble. Think how confusing this must have been for Fermi. He was so confused that he tried submerging the apparatus in a pond behind the physics building, and the reactions increased even more!

A uranium-235 chain reaction triggered by neutrons works best with slow neutrons. Therefore, nuclear reactors need a “moderator”: a substance that slows the neutrons down. The moderator is the key to understanding what happened at Chernobyl.

The best moderators are materials whose nuclear mass is about the same as the mass of a neutron. If the nucleus were a lot heavier than the neutron, the neutron would not slow down after the collision. Imagine this tennis ball is the light neutron, and this big basketball is the heavy nucleus. When the neutron hits the nucleus, it just bounces off. It changes direction but doesn’t slow down. Now, imagine this neutron collides with a very light particle, represented by this ping pong ball. When the relatively heavy neutron hits the light particle, it will just push it out of the way like a train hitting a mosquito. The neutron itself won’t slow down much. To be effective at slowing the neutron down, the nucleus needs to be about the same mass as the neutron.

What has a similar mass to a neutron? A proton. What nucleus contains a single proton? Hydrogen. Watch what happens when a neutron and a hydrogen nucleus collide. This ball is the neutron, and this ball is the proton: the hydrogen nucleus. Right after the collision, the neutron stops! It is like when a moving billiard ball slams into a stationary billiard ball; the one that was moving stops, and the one that was stationary starts moving. Interacting with hydrogen is a great way to slow down neutrons. Therefore, hydrogen is a great moderator. Where do you find a lot of hydrogen? Water (H2O). It was the hydrogen in the wood of the table top that was so effective at slowing Fermi’s neutrons. The water in the pond behind the physics building was even better; it had even more hydrogen.
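The billiard-ball argument can be made quantitative. For a head-on elastic collision between a neutron and a stationary nucleus, conservation of energy and momentum give the fraction of kinetic energy the neutron keeps; this standard result (not worked out in the talk) is below.

```latex
% Energy remaining to a neutron (mass m, energy E) after a head-on elastic
% collision with a stationary nucleus of mass M.
\[
  \frac{E'}{E} = \left( \frac{M - m}{M + m} \right)^{2}
\]
% Hydrogen-1  (M ~ m):    E'/E ~ 0;    the neutron stops, like the billiard balls.
% Carbon-12   (M ~ 12m):  E'/E ~ 0.72; about 28% of the energy lost per collision.
% Uranium-238 (M ~ 238m): E'/E ~ 0.98; the neutron barely slows, like the basketball.
```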

Other elements that have relatively light nuclei are also good moderators, such as carbon (carbon’s nucleus has 6 protons and 6 neutrons). It’s somewhat heavier than you want in order to slow neutrons optimally, but it’s not bad, and it’s abundant, cheap, and dense. During the Manhattan Project, Fermi (who had fled fascist Italy and settled in the United States) built the first nuclear reactor in a squash court under the football stadium at the University of Chicago. His reactor was a “pile” of uranium balls, with each ball surrounded by blocks of graphite (almost pure carbon, like the lead in this pencil). The uranium was the fuel and the graphite was the moderator.

Before we talk more about moderators, you might be wondering why Fermi’s reactor didn’t explode and destroy Chicago. One reason was that his uranium was a mix of uranium-235 and uranium-238, and was in fact 99% uranium-238. The uranium-238 doesn’t contribute to the chain reaction; it’s not fissile. To make matters worse, uranium-238 can absorb a neutron and dampen the chain reaction. When uranium-238 captures a neutron to become uranium-239, it takes a neutron “out of action,” so to speak. During the Manhattan Project the United States spent enormous amounts of time and money separating uranium-235 from uranium-238, so it could use almost pure uranium-235 in the atomic bomb. But Fermi didn’t have any such enriched uranium.

Also, Fermi controlled his reactor using a super-duper neutron absorber, cadmium. Cadmium sucks up the neutrons, stopping the chain reaction. Fermi could push in or pull out cadmium control rods to keep the speed of the reaction “just right.” As an emergency backup he had one big cadmium control rod suspended over the reactor by a rope. One of Fermi’s assistants stood by with an axe. If things started to go out of control, his job was to cut the rope, dropping the cadmium rod and stopping the reaction. Fortunately, Fermi took great pains to operate the reactor carefully, and no such problems occurred. Had things gone wrong, the reactor probably wouldn’t have exploded like a bomb. It would have just gotten very hot and melted, causing a “meltdown” with all sorts of radiation release, like at Chernobyl. It’s a scary thought because it was in the middle of Chicago, but we were at war against the Nazis, so people took some risks.

Now back to the moderator. Let’s consider three different moderators. First, “heavy water.” This is water containing a rare, heavy isotope of hydrogen, hydrogen-2 (its nucleus consists of one proton and one neutron). While it is not quite as good as hydrogen-1 at slowing down neutrons, it’s still very good, and it has one advantage. Hydrogen-1 (a single proton) can sometimes absorb a neutron to become hydrogen-2. It’s as if occasionally these two balls stick together when they hit. This capture of a neutron slows the chain reaction. Hydrogen-2, however, rarely absorbs a neutron to become hydrogen-3, so it’s a great moderator: it slows the neutrons without absorbing them. During World War Two, the Germans tried to construct a nuclear reactor using heavy water as the moderator. The problem was, heavy water is difficult and expensive to make. There was a plant in Norway that produced heavy water, and it was controlled by the Germans. British-trained Norwegian commandos sabotaged the plant, sending all that precious heavy water down the drain. Heavy water is so expensive that few reactors use it nowadays (Canada’s CANDU reactors are a notable exception), and we won’t discuss it anymore.

The second moderator we’ll consider is regular water made using hydrogen-1 (I’ll call it just “water” as opposed to “heavy water”). Nowadays most nuclear reactors in the United States use water as the moderator. They also use water as the coolant. You need a coolant to keep the reactor from getting too hot and melting. Also, the coolant is how you get the heat out of the reactor so you can use it to run a steam engine and generate power. So in the United States, water in a nuclear reactor has two purposes: it’s the moderator and the coolant. Suppose that the reactor, for some reason, gets too hot and the water starts boiling off. That will cause the moderator to boil away. No more moderator, no more slowing down the neutrons. No more slowing down the neutrons, no more chain reaction. This is a type of negative feedback loop that makes the reactor inherently safe. It’s like the thermostat in your house: if the house gets too hot, the thermostat turns off the furnace, and the house cools down. Recall that hydrogen-1 can also absorb neutrons, and in theory that could cause the reactor to speed up when the water boils away because there is less neutron absorption. So neutron absorption and moderation are opposite effects. But a reduction of neutron absorption is less important than the disappearance of the moderator, so on the whole when water boils the reaction slows down. We say that the reactor has a “negative void coefficient.” The “void” means the water is boiling, forming bubbles. The “negative” means this negative feedback loop occurs, keeping the reaction from increasing out of control.

Now for the third moderator: carbon. The Russians built something called an RBMK reactor. This is a Russian acronym, so I won’t try to explain what the different letters mean. Suffice it to say, an RBMK reactor is a nuclear reactor that uses carbon as the moderator. Chernobyl was an RBMK reactor. Like Fermi’s original reactor, the carbon was in the form of graphite. In addition, an RBMK reactor uses water as the coolant. Graphite is the moderator and water is the coolant. Now, suppose this type of reactor begins to heat up and the water starts to boil away. The hydrogen in the water is not the primary moderator; the carbon in the graphite is. So, the reaction doesn’t slow down when the water boils away; the carbon moderator is still there, slowing the neutrons. But remember, the hydrogen in water sometimes absorbs a neutron, taking it out of action. This neutron capture decreases as the water boils away, so the reaction increases. Increased heat causes water to boil, causing the reaction to speed up, causing increased heat, causing more water to boil, causing the reaction to speed up even more, causing yet more increased heat… This is a positive feedback loop; a vicious cycle. The reactor has a “positive void coefficient.” It’s as if the thermostat in your house were wired wrong, so when the house got hot the furnace turned ON, heating the house more. Normally the reactor is designed with all sorts of controls to prevent this positive feedback loop from taking off. For instance, control rods can be pushed in and out as needed. But, if for some reason these controls are not in place, the reactor will heat up dramatically and quickly, just as it did at Chernobyl.
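To make the two feedback loops concrete, here is a minimal toy simulation in Python (my own sketch; every number in it is invented for illustration and does not describe any real reactor). Power above equilibrium boils coolant into voids, and the sign of the void coefficient decides whether the voids damp or amplify the power.

```python
# Toy model of void feedback (illustration only; all values are made up).
# Power is measured relative to equilibrium (1.0). Excess power creates
# steam voids; the void coefficient feeds the voids back into reactivity.

def simulate(void_coefficient, steps=40, dt=0.1):
    power = 1.1                            # start 10% above equilibrium
    for _ in range(steps):
        void = power - 1.0                 # voids grow with excess power
        reactivity = void_coefficient * void
        power += reactivity * power * dt   # reactivity drives growth or decay
    return power

# Negative void coefficient (water-moderated design): boiling removes the
# moderator, so the disturbance dies out and power settles back toward 1.0.
print(simulate(void_coefficient=-0.5))

# Positive void coefficient (RBMK-like design): boiling removes only a
# neutron absorber, so the disturbance feeds on itself and power climbs.
print(simulate(void_coefficient=+0.5))
```

The only thing that differs between the two runs is the sign of the coefficient; that sign difference is the thermostat wired right versus wired wrong.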

Why do we have nuclear reactors? Nuclear reactors produce heat to power a steam engine, which in turn generates electricity. The steam needs to be at high pressure, so it can turn the turbine. Therefore, the reactor is in a pressure container. It’s like a pressure cooker. If the water boils too much, the pressure builds up until the container can’t handle it anymore and bursts, releasing steam. It’s a little like your whistling teapot, except instead of whistling when the water boils, the reactor explodes. And unlike your teapot, the reactor releases radioactive elements along with the steam. You get a cloud of radioactivity.

Another problem with an RBMK reactor is that graphite burns. It’s pure carbon. It’s like coal. Once the pressure container bursts, oxygen can get in, igniting the graphite and starting a fire. The graphite then spews radioactive smoke up into the atmosphere. Many of the people killed in the Chernobyl accident were firemen trying to put out the fire.

Another issue, a little less important but worth mentioning, was the control rods. Chernobyl had control rods made out of boron, which, like cadmium, is an excellent neutron absorber. It vacuums up the neutrons and stops the chain reaction. The problem was, the control rods were tipped with graphite. As a control rod was pushed in, initially it was like adding moderator, quickening the reaction. Only when the rod was completely pushed in would the boron absorb neutrons, slowing the reaction. So, the control rods would eventually suppress the chain reaction, but initially they made things worse. If, like at Chernobyl, a problem developed quickly, the control rods couldn’t keep up.

I won’t go into all the comedy of errors that was the immediate cause of the accident at Chernobyl. The reactor was undergoing a test, and several of the controls were turned off. Some safeguards were still in place, but mistakes, poor communication, and ignorance prevented them from working. Whatever the immediate cause of the accident, the crucial point is that the reactor design itself was unstable. It’s like trying to balance this pencil on its tip. You can do it if you are careful and have some controls, but it’s inherently unstable. If you are not always vigilant, the pencil will fall over. The unstable design of the Chernobyl reactor made it a disaster waiting to happen.
If you would like to hear me give this talk (slightly modified), you can watch the YouTube video below. This winter I was teaching the second semester of introductory physics, and when the coronavirus pandemic arrived I had to switch to an online format. I recorded a lecture about Chernobyl when we were discussing nuclear energy.

My Chernobyl talk, given to my Introductory Physics class,
online from home because of Covid-19.