Friday, April 25, 2014

Bernard Cohen and the Risk of Low Level Radiation

In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I discuss the work of physicist Bernard Cohen (1924-2012). In Chapter 16 (Medical Use of X Rays), we describe the dangers of radon gas and show a figure from a 1995 study in which Cohen examined lung cancer mortality as a function of radon gas level (our Fig. 16.57). Interestingly, the mortality rate goes down as radon exposure increases: exactly the opposite of what you would expect if you believed that radiation exposure from radon causes lung cancer. In this blog entry, I consider two questions: who was Bernard Cohen, and how is his work perceived today?

Cohen was a professor of physics at the University of Pittsburgh. An obituary published by the university states
Bernard Cohen was born (1924) and raised in Pittsburgh, PA and did his undergraduate work at Case (now Case Western Reserve Univ.). After service as an engineering officer with the U.S. Navy in the Pacific and the China coast during World War II, he did graduate work in Physics at Carnegie Tech (now Carnegie-Mellon Univ.), receiving a Ph.D. in 1950 with a thesis on “Experimental Studies of High Energy Nuclear Reactions” – at that time “high energy” was up to 15 MeV. His next eight years were at Oak Ridge National Laboratory, and in 1958 he moved to the University of Pittsburgh where he spent the remainder of his career except for occasional leaves of absence. Until 1974, his research was on nuclear structure and nuclear reactions .… His nuclear physics research was recognized by receipt of the American Physical Society Tom Bonner Prize (1981), and his election as Chairman of the A.P.S. Division of Nuclear Physics (1974-75).

In the early 1970s, he began shifting his research away from basic physics into applied problems. Starting with trace element analysis utilizing nuclear scattering and proton and X-ray induced X-ray emission (PIXE and XRF) to solve various practical problems, and production of fluorine-18 for medical applications, he soon turned his principal attention to societal issues on energy and the environment. For this work he eventually received the Health Physics Society Distinguished Scientific Achievement Award, the American Nuclear Society Walter Zinn Award (contributions to nuclear power), Public Information Award, and Special Award (health impacts of low level radiation), and was elected to membership in National Academy of Engineering; he was also elected Chairman of the Am. Nuclear Society Division of Environmental Sciences (1980-81). His principal work was on hazards from plutonium toxicity, high level waste from nuclear power (his first papers on each of these drew over 1000 requests for reprints), low level radioactive waste, perspective on risks in our society, society’s willingness to spend money to avert various type risks, nuclear and non-nuclear targets for terrorism, health impacts of radon from uranium mining, radiation health impacts from coal burning, impacts of radioactivity dispersed in air (including protection from being indoors), in the ground, in rivers, and in oceans, cancer and genetic risks from low level radiation, discounting in assessment of future risks from buried radioactive waste, physics of the reactor meltdown accident, disposal of radioactivity in oceans, the iodine-129 problem, irradiation of foods, hazards from depleted uranium, assessment of Cold War radiation experiments on humans, etc.

In the mid-1980s, he became deeply involved in radon research, developing improved detection techniques and organizing surveys of radon levels in U.S. homes accompanied by questionnaires from which he determined correlation of radon levels with house characteristics, environmental factors, socioeconomic variables, geography, etc. These programs eventually included measurements in 350,000 U.S. homes. From these data and data collected by EPA and various State agencies, he compiled a data base of average radon levels in homes for 1600 U.S. counties and used it to test the linear-no threshold theory of radiation-induced cancer; he concluded that that theory fails badly, grossly over-estimating the risk from low level radiation. This finding was very controversial, and for 10 years after his retirement in 1994, he did research extending and refining his analysis and responding to criticisms.
Although he died two years ago, his University of Pittsburgh website is still maintained, and there you can find a list of many of his articles. The first item in the list is the article from which our Fig. 16.57 comes. I particularly like the 4th item in the list, his catalog of risks we face every day. You can find the key figure here. Anyone interested in risk assessment should have a look.

No doubt Cohen’s work is controversial. In IPMB, we cite one debate with Jay Lubin, a back-and-forth in the journal Health Physics with these titles:
Cohen, B. L. (1995) “Test of the Linear-No Threshold Theory of Radiation Carcinogenesis for Inhaled Radon Decay Products.”

Lubin, J. H. (1998) “On the Discrepancy Between Epidemiologic Studies in Individuals of Lung Cancer and Residential Radon and Cohen’s Ecologic Regression.”

Cohen, B. L. (1998) “Response to Lubin’s Proposed Explanations of the Discrepancy.”

Lubin, J. H. (1998) “Rejoinder: Cohen’s Response to ‘On the Discrepancy Between Epidemiologic Studies in Individuals of Lung Cancer and Residential Radon and Cohen’s Ecologic Regression.’”

Cohen, B. L. (1999) “Response to Lubin’s Rejoinder.”

Lubin, J. H. (1999) “Response to Cohen’s Comments on the Lubin Rejoinder.”
Who says science is boring!

What is the current opinion of Cohen’s work? As I see it, there are two issues to consider: 1) the validity of the specific radon study performed by Cohen, and 2) the general correctness of the linear no-threshold model for radiation risk. About Cohen’s study, here is what the World Health Organization had to say in a 2001 publication.
This disparity is striking, and it is not surprising that some researchers have accepted these data at face value, taking them either as evidence of a threshold dose for high-LET radiation, below which no effect is produced, or as evidence that exposure of the lung to relatively high levels of natural background radiation reduces the risk for lung cancer due to other causes. To those with experience in interpreting epidemiological observations, however, neither conclusion can be accepted (Doll, 1998). Cohen’s geographical correlation study has intrinsic methodological difficulties (Stidley and Samet, 1993, 1994) which hamper any interpretation as to causality or lack of causality (Cohen, 1998; Lubin, 1998a,b; Smith et al., 1998; BEIR VI). The probable explanation for the correlation is uncontrolled confounding by cigarette smoking and inadequate assessment of the exposure of a mobile population such as that of the USA.
Needless to say, Cohen did not accept these conclusions. Honestly, I have not looked closely enough into the details of this particular study to provide any of my own insights.

On the larger question of the validity of the linear no-threshold model, I am a bit of a skeptic, but I realize the jury is still out. I have discussed the linear no-threshold model before in this blog here, here, here, and here. The bottom line is shown in our Fig. 16.58, which plots relative risk versus radon concentration for low doses of radiation; the error bars are so large that the data could be said to be consistent with almost any model. It is devilishly hard to get data about very low dose radiation effects.

Right or wrong, you have to admire Bernard Cohen. He made many contributions throughout his long and successful career, and he defended his opinions about low-level radiation risk with courage and spunk. (And, as the 70th anniversary of D-Day approaches, we should all honor his service in World War II). If you want to learn more about Cohen, see his Health Physics Society obituary here, another obituary here, and an interview about nuclear energy here. For those of you who want to hear it straight from the horse’s mouth, you can watch and listen to Cohen's own words in these videos.


Friday, April 18, 2014

The Periodic Table in IPMB

The periodic table of the elements summarizes so much of science, and chemistry in particular. Of course, the periodic table is crucial in biology and medicine. How many of the over one hundred elements do Russ Hobbie and I mention in the 4th edition of Intermediate Physics for Medicine and Biology? Surveying all the elements is too big of a job for one blog entry, so let me consider just the first twenty elements: hydrogen through calcium. How many of these appear in IPMB?
1. Hydrogen. Hydrogen appears in many places in IPMB, including Chapter 14 (Atoms and Light), which describes the hydrogen energy levels and emission spectrum.

2. Helium. Liquid helium is mentioned when describing SQUID magnetometers in Chapter 8 (Biomagnetism), and the alpha particle (a helium nucleus) plays a major role in Chapter 17 (Nuclear Physics and Nuclear Medicine).

3. Lithium. Chapter 7 (The Exterior Potential and the Electrocardiogram) mentions the lithium-iodide battery that powers most pacemakers, and Chapter 16 (Medical Use of X Rays) mentions lithium-drifted germanium x-ray detectors.

4. Beryllium. I can’t find beryllium anywhere in IPMB.

5. Boron. Boron neutron capture therapy is reviewed in Chapter 16 (Medical Use of X Rays).

6. Carbon. A feedback loop relating the carbon dioxide concentration in the alveoli to the breathing rate is analyzed in Chapter 10 (Feedback and Control).

7. Nitrogen. When working problems about the atmosphere, readers are instructed to consider the atmosphere to be pure nitrogen (rather than only 80% nitrogen) in Chapter 3 (Systems of Many Particles).

8. Oxygen. Oxygen is often mentioned when discussing hemoglobin, such as in Chapter 18 (Magnetic Resonance Imaging) when describing functional MRI.

9. Fluorine. The isotope fluorine-18, a positron emitter, is used in positron emission tomography (Chapter 17, Nuclear Physics and Nuclear Medicine).

10. Neon. Not present.

11. Sodium. Sodium and sodium channels are essential for firing action potentials in nerves (Chapter 6, Impulses in Nerve and Muscle Cells).

12. Magnesium. Russ and I don’t mention magnesium by name. However, Problem 16 in Chapter 9 (Electricity and Magnetism at the Cellular Level) provides a citation for the mechanism of anomalous rectification in a potassium channel. The mechanism is block of the channel by magnesium ions.

13. Aluminum. Chapter 16 (Medical Use of X Rays) tells how sheets of aluminum are used to filter x-ray beams, removing the low-energy photons while passing the high-energy ones.

14. Silicon. Silicon x-ray detectors are considered in Chapter 16 (Medical Use of X Rays).

15. Phosphorus. The section on Distances and Sizes that starts Chapter 1 (Mechanics) considers the molecule adenosine triphosphate (ATP), which is crucial for metabolism.

16. Sulfur. The isotope technetium-99m is often combined with colloidal sulfur for use in nuclear medicine imaging (Chapter 17, Nuclear Physics and Nuclear Medicine).

17. Chlorine. Ion channels are described in Chapter 9 (Electricity and Magnetism at the Cellular Level), including chloride ion channels.

18. Argon. In Problem 32 of Chapter 16 (Medical Use of X Rays), we ask the reader to calculate the stopping power of electrons in argon.

19. Potassium. The selectivity and voltage dependence of ion channels have been studied using the Shaker potassium ion channel (Chapter 9, Electricity and Magnetism at the Cellular Level).

20. Calcium. After discussing diffusion in Chapter 4 (Transport in an Infinite Medium), in Problem 23 we ask the reader to analyze calcium diffusion when a calcium buffer is present.

Friday, April 11, 2014

Bilinear Interpolation

If you know the value of a variable at a regular array of points (xᵢ, yⱼ), you can estimate its value at intermediate positions (x, y) using an interpolation function. For bilinear interpolation, the function f(x,y) is

f(x,y) = a + b x + c y + d x y

where a, b, c, and d are constants. You can determine these constants by requiring that f(x,y) is equal to the known data at the points (xᵢ, yⱼ), (xᵢ₊₁, yⱼ), (xᵢ, yⱼ₊₁), and (xᵢ₊₁, yⱼ₊₁):

f(xᵢ, yⱼ) = a + b xᵢ + c yⱼ + d xᵢ yⱼ
f(xᵢ₊₁, yⱼ) = a + b xᵢ₊₁ + c yⱼ + d xᵢ₊₁ yⱼ
f(xᵢ, yⱼ₊₁) = a + b xᵢ + c yⱼ₊₁ + d xᵢ yⱼ₊₁
f(xᵢ₊₁, yⱼ₊₁) = a + b xᵢ₊₁ + c yⱼ₊₁ + d xᵢ₊₁ yⱼ₊₁ .

Solving these four equations for the four unknowns a, b, c, and d, plugging those values into the equation for f(x,y), and then doing a bit of algebra gives you

f(x,y) = [ f(xᵢ, yⱼ) (xᵢ₊₁ − x) (yⱼ₊₁ − y) + f(xᵢ₊₁, yⱼ) (x − xᵢ) (yⱼ₊₁ − y)
                            + f(xᵢ, yⱼ₊₁) (xᵢ₊₁ − x) (y − yⱼ) + f(xᵢ₊₁, yⱼ₊₁) (x − xᵢ) (y − yⱼ) ] / (Δx Δy)

where xᵢ₊₁ = xᵢ + Δx and yⱼ₊₁ = yⱼ + Δy. To see why this makes sense, let x = xᵢ and y = yⱼ. In that case, the last three terms in this expression go to zero, and the first term reduces to f(xᵢ, yⱼ), just as you would want an interpolation function to behave. As you can check for yourself, the same is true at all four data points. If you hold y fixed, the function is linear in x, and if you hold x fixed, the function is linear in y. But if you move along a sloped line, say y = e x for some constant e, the cross term d x y makes the function quadratic in x.

If you want to try it yourself, see http://www.ajdesigner.com/phpinterpolation/bilinear_interpolation_equation.php
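If you would rather compute than click, here is a minimal sketch of the formula in Python (the function name and the test values are my own, not from IPMB):

def bilinear_interpolate(x, y, x_i, y_j, dx, dy, f11, f21, f12, f22):
    """Interpolate f at (x, y) inside one grid cell from the corner values
    f11 = f(x_i, y_j),       f21 = f(x_i + dx, y_j),
    f12 = f(x_i, y_j + dy),  f22 = f(x_i + dx, y_j + dy)."""
    return (f11 * (x_i + dx - x) * (y_j + dy - y)
          + f21 * (x - x_i)      * (y_j + dy - y)
          + f12 * (x_i + dx - x) * (y - y_j)
          + f22 * (x - x_i)      * (y - y_j)) / (dx * dy)

# At a corner the interpolant reproduces the data exactly...
print(bilinear_interpolate(0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 2.0, 3.0, 4.0))  # 1.0
# ...and at the center of the cell it averages the four corners.
print(bilinear_interpolate(0.5, 0.5, 0.0, 0.0, 1.0, 1.0, 1.0, 2.0, 3.0, 4.0))  # 2.5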

In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I introduce bilinear interpolation in Problem 20 of Chapter 12, in the context of computed tomography. In CT, you obtain the Fourier transform of the image at points on a polar coordinate grid (kᵢ, θⱼ). In other words, the points lie on concentric circles in the spatial frequency plane, each of radius kᵢ. In order to compute a numerical two-dimensional Fourier reconstruction to recover the image, one needs the Fourier transform on a Cartesian grid. Thus, one needs to interpolate from the polar samples at (kᵢ, θⱼ) to the Cartesian grid points. In Problem 20, we suggest doing this using bilinear interpolation, and ask the reader to perform a numerical example.
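To give a flavor of that gridding step, here is a hedged sketch (my own construction, not the solution to Problem 20): convert each Cartesian point to polar coordinates, locate the bracketing polar samples, and apply the bilinear formula in the (k, θ) plane. The function name, the grid layout, and the neglect of edge cases are all my assumptions.

import math

def polar_to_cartesian_sample(F_polar, dk, dtheta, kx, ky):
    # Estimate the Fourier transform at the Cartesian point (kx, ky) by
    # bilinear interpolation on a polar grid F_polar[i][j] = F(i*dk, j*dtheta).
    # A sketch only: it ignores points beyond the outermost circle.
    k = math.hypot(kx, ky)
    theta = math.atan2(ky, kx) % (2 * math.pi)
    i, j = int(k // dk), int(theta // dtheta)
    jp = (j + 1) % len(F_polar[0])          # the angle wraps around at 2*pi
    u, v = k / dk - i, theta / dtheta - j   # fractional position in the cell
    return ((1 - u) * (1 - v) * F_polar[i][j]
            + u * (1 - v) * F_polar[i + 1][j]
            + (1 - u) * v * F_polar[i][jp]
            + u * v * F_polar[i + 1][jp])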

I like bilinear interpolation because it is simple, intuitive, and often “good enough.” But it is not necessarily the best way to proceed. Tomographic methods arise not only in CT but also in synthetic aperture radar (SAR) (see: Munson, D. C., J. D. O’Brien, and W. K. Jenkins (1983) “A Tomographic Formulation of Spotlight-Mode Synthetic Aperture Radar,” Proceedings of the IEEE, Volume 71, Pages 917–925). In their conference proceedings paper “A Comparison of Algorithms for Polar-to-Cartesian Interpolation in Spotlight Mode SAR” (IEEE International Conference on Acoustics, Speech and Signal Processing ’85, Volume 10, Pages 1364–1367, 1985), Munson et al. write
Given the polar Fourier samples, one method of image reconstruction is to interpolate these samples to a cartesian grid, apply a 2-D inverse FFT, and to then display the magnitude of the result. The polar-to-cartesian interpolation operation must be of extremely high quality to prevent aliasing . . . In an actual system implementation the interpolation operation may be much more computationally expensive than the FFT. Thus, a problem of considerable importance is the design of algorithms for polar-to-cartesian interpolation that provide a desirable quality/computational complexity tradeoff.
Along the same lines, O’Sullivan (“A Fast Sinc Function Gridding Algorithm for Fourier Inversion in Computer Tomography,” IEEE Trans. Medical Imaging, Volume 4, Pages 200–207, 1985) writes
Application of Fourier transform reconstruction methods is limited by the perceived difficulty of interpolation from the measured polar or other grid to the Cartesian grid required for efficient computation of the Fourier transform. Various interpolation schemes have been considered, such as nearest-neighbor, bilinear interpolation, and truncated sinc function FIR interpolators [3]-[5]. In all cases there is a tradeoff between the computational effort required for the interpolation and the level of artifacts in the final image produced by faulty interpolation.
There has been considerable study of this problem. For instance, see
Stark et al. (1981) “Direct Fourier Reconstruction in Computer Tomography,” IEEE Trans. Acoustics, Speech, and Signal Processing, Volume 29, Pages 237–245.

Moraski, K. J. and D. C. Munson (1991) “Fast Tomographic Reconstruction Using Chirp-z Interpolation,” 1991 Conference Record of the Twenty-Fifth Asilomar Conference on Signals, Systems and Computers, Volume 2, Pages 1052–1056.
Going into the details of this topic would take me more deeply into signal processing than I am comfortable with. Hopefully, Problem 20 in IPMB will give you a flavor of what sort of interpolation needs to be done, and the references given in this blog entry can provide an entry point to more detailed analyses.

Friday, April 4, 2014

17 Equations that Changed the World

In Pursuit of the Unknown:
17 Equations that Changed the World,
by Ian Stewart.
Ian Stewart’s book In Pursuit of the Unknown: 17 Equations that Changed the World “is the story of the ascent of humanity, told through 17 equations.” Of course, my first thought was “I wonder how many of those equations are in the 4th edition of Intermediate Physics for Medicine and Biology?” Let’s see.
1. Pythagorean theorem: a² + b² = c². In Appendix B of IPMB, Russ Hobbie and I discuss vectors, and quote Pythagoras’ theorem when relating a vector’s x and y components to its magnitude.

2. Logarithms: log(xy)=log(x)+log(y). In Appendix C, we present many of the properties of logarithms, including this sum/product rule as Eq. C6. Log-log plots are discussed extensively in Chapter 2 (Exponential Growth and Decay).

3. Definition of the derivative: df/dt = limit h → 0 (f(t+h)-f(t))/h. We assume the reader has taken introductory calculus (the preface states “Calculus is used without apology”), so we don’t define the derivative or consider what it means to take a limit. However, in Appendix D we present the Taylor series through its first two terms, which is essentially the same equation as the definition of the derivative, just rearranged.

4. Newton’s law of gravity: F = Gm₁m₂/d². Russ and I are ruthless about focusing exclusively on physics that has implications for biology and medicine. Almost all organisms live at the surface of the earth. Therefore, we discuss the acceleration of gravity, g, starting in Chapter 1 (Mechanics), but not Newton’s law of gravity.

5. The square root of minus one: i² = −1. Russ and I generally avoid complex numbers, but they are mentioned in Chapter 11 (The Method of Least Squares and Signal Analysis) as an alternative way to formulate the Fourier series. We write the equation as i = √−1, which is the same thing as i² = −1.

6. Euler’s formula for polyhedra: F − E + V = 2. We never come close to mentioning it.

7. Normal distribution: P(x) = 1/√(2πσ²) exp[−(x−μ)²/2σ²]. Appendix I is about the Gaussian (or normal) probability distribution, which is introduced in Eq. I.4.

8. Wave equation: ∂²u/∂t² = c² ∂²u/∂x². Russ and I introduce the wave equation (Eq. 13.5) in Chapter 13 (Sound and Ultrasound).

9. Fourier transform: f̂(k) = ∫ f(x) exp(−2πixk) dx. In Chapter 11 (The Method of Least Squares and Signal Analysis) we develop the Fourier transform in detail (Eq. 11.57), and then use it in Chapter 12 (Images) to do tomography.

10. Navier-Stokes equation: ρ (∂v/∂t + v ⋅∇ v) = -∇ p + ∇ ⋅ T + f. Russ and I analyze biological fluid mechanics in Chapter 1 (Mechanics), and write down a simplified version of the Navier-Stokes equation in Problem 28.

11. Maxwell’s equations: ∇ ⋅ E = 0, ∇ × E = −(1/c) ∂H/∂t, ∇ ⋅ H = 0, and ∇ × H = (1/c) ∂E/∂t. Chapter 6 (Impulses in Nerve and Muscle Cells), Chapter 7 (The Exterior Potential and the Electrocardiogram), and Chapter 8 (Biomagnetism) discuss each of Maxwell’s equations. In Problem 22 of Chapter 8, Russ and I ask the reader to collect all these equations together. Yes, I own a tee shirt with Maxwell’s equations on it.

12. Second law of thermodynamics: dS ≥ 0. In Chapter 3 (Systems of Many Particles), Russ and I discuss the second law of thermodynamics. We derive entropy from statistical considerations (I would have chosen S = k_B ln Ω rather than dS ≥ 0 to sum up the second law). We state in words that “the total entropy remains the same or increases,” although we don’t actually write dS ≥ 0.

13. Relativity: E = mc2. We don’t discuss special relativity in much detail, but we do need E = mc2 occasionally, most notably when discussing pair production in Chapter 15 (Interaction of Photons and Charged Particles with Matter).

14. Schrödinger’s equation: i ħ ∂Ψ/∂t = Ĥ Ψ. Russ and I don’t write down or analyze Schrödinger’s equation, but we do mention it by name, particularly at the start of Chapter 3 (Systems of Many Particles).

15. Information theory: H = - Σ p(x) log p(x). Not mentioned whatsoever.

16. Chaos theory: xᵢ₊₁ = k xᵢ (1 − xᵢ). Russ and I analyze chaotic behavior in Chapter 10 (Feedback and Control), including the logistic map xᵢ₊₁ = k xᵢ (1 − xᵢ) (Eq. 10.36); see the short sketch after this list.

17. Black-Scholes equation: ½ σ²S² ∂²V/∂S² + rS ∂V/∂S + ∂V/∂t − rV = 0. Never heard of it. Something about economics and the 2008 financial crash. Nothing about it in IPMB.
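As promised in item 16, here is a quick Python sketch (my own example, not from IPMB) of the hallmark of chaos in the logistic map: at k = 4, two trajectories that start almost identically soon bear no resemblance to each other.

# Iterate the logistic map from two nearby initial conditions.
k = 4.0
x, y = 0.300000, 0.300001
for n in range(50):
    x, y = k * x * (1 - x), k * y * (1 - y)
print(x, y)  # after 50 iterations the two trajectories have fully diverged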
Seventeen is a strange number of equations to select (a medium-sized prime number). If I were to round it out to twenty, then I would have three to select on my own. My first thought is Newton’s second law, F = ma, but Stewart mentions that this relationship underlies both the Navier-Stokes equation and the wave equation, so I guess it is already present implicitly. Here are my three:
18. Exponential equation with constant input: dy/dt = a − by. Chapter 2 of IPMB (Exponential Growth and Decay) is dedicated to the exponential function, and this equation appears over and over throughout the book. Stewart discusses the exponential function briefly in his chapter on logarithms, but I am inclined to add the differential equation leading to the exponential function to the list (see the sketch after this list). Among its many uses, this function is crucial for understanding the decay of radioactive isotopes in Chapter 17 (Nuclear Physics and Nuclear Medicine).

19. Diffusion equation: ∂C/∂t = D ∂²C/∂x². To his credit, Stewart introduces the diffusion equation in his chapter on the Fourier transform, and indeed it was Fourier’s study of the heat equation (the same as the diffusion equation, with T for temperature replacing C for concentration) that motivated the development of the Fourier series. Nevertheless, the diffusion equation is so central to biology, and discussed in such detail in Chapter 4 (Transport in an Infinite Medium) of IPMB, that I had to include it. Some may argue that if we include both the wave equation and the diffusion equation, we also should add Laplace’s equation, but I consider that a special case of Maxwell’s equations, so it is already in the list.

20. Light quanta: E = hν. Although Stewart included Schrödinger’s equation of quantum mechanics, I would include this second equation containing Planck’s constant h. It summarizes the wave-particle duality of light, and is crucially important in Chapters 14 (Atoms and Light), 15 (Interaction of Photons and Charged Particles with Matter), and 16 (Medical Use of X Rays).
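As promised in item 18, the equation dy/dt = a − by has the closed-form solution y(t) = a/b + (y₀ − a/b) exp(−bt) (for b > 0), which relaxes to the steady state a/b. Here is a minimal numerical sanity check in Python (my own, with arbitrary parameter values):

import math

# Crude Euler integration of dy/dt = a - b*y, compared with the
# closed-form solution at t = 3; a sanity check only.
a, b, y0 = 2.0, 0.5, 0.0
t, dt, y = 0.0, 1e-4, y0
while t < 3.0:
    y += (a - b * y) * dt
    t += dt
exact = a / b + (y0 - a / b) * math.exp(-b * 3.0)
print(y, exact)  # both approach the steady state a/b = 4 as t grows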
Runners-up include the Bloch equations (since I need something from Chapter 18, Magnetic Resonance Imaging), the Boltzmann factor (except that it is a factor, not an equation), Stokes’ law, the ideal gas law and its analog van’t Hoff’s law from Chapter 5 (Transport Through Neutral Membranes), the Hodgkin and Huxley equations, the Poisson-Boltzmann equation in Chapter 9 (Electricity and Magnetism at the Cellular Level), the Poisson probability distribution, and Planck’s blackbody radiation law (perhaps in place of E = hν).

Overall, I think studying the 4th edition of Intermediate Physics for Medicine and Biology introduces the reader to most of the critical equations that have indeed changed the world.