Friday, February 21, 2014

Principles of Musical Acoustics

Principles of Musical Acoustics, by William Hartmann.
In the 4th edition of Intermediate Physics for Medicine and Biology, Russ Hobbie and I added a new chapter (Chapter 13) about Sound and Ultrasound. This allows us to discuss acoustics and hearing, an interesting mix of physics and physiology. But one aspect of sound we don’t analyze is music. Yet there is much physics in music. In a previous blog post, I talked about Oliver Sacks’ book Musicophilia, a fascinating account of the neurophysiology of music. Unfortunately, there wasn’t a lot of physics in that work.

Last year, William Hartmann of Michigan State University (where my daughter Kathy is now a graduate student) published a book that provides the missing physics: Principles of Musical Acoustics. The Preface begins
Musical acoustics is a scientific discipline that attempts to put the entire range of human musical activity under the microscope of science. Because science seeks understanding, the goal of musical acoustics is nothing less than to understand how music “works,” physically and psychologically. Accordingly, musical acoustics is multidisciplinary. At a minimum it requires input from physics, physiology, psychology, and several engineering technologies involved in the creation and reproduction of musical sound.
My favorite chapters in Hartmann’s book are Chapter 13 on Pitch and Chapter 14 on Localization of Sound. Chapter 13 begins
Pitch is the psychological sensation of the highness or the lowness of a tone. Pitch is the basis of melody in music and of emotion in speech. Without pitch, music would consist only of rhythm and loudness. Without pitch, speech would be monotonic—robotic. As human beings, we have astonishingly keen perception of pitch. The principal physical correlate of the psychological sensation of pitch is the physical property of frequency, and our keen perception of pitch allows us to make fine discriminations along a frequency scale. Between 100 and 10,000 Hz we can discriminate more than 2,000 different frequencies!
That is two thousand different pitches within a factor of one hundred in frequency (over six octaves), meaning adjacent distinguishable pitches differ in frequency by about 0.23%. A semitone in music (for example, the difference between a C and a C-sharp) is a difference of about 5.9%. That’s pretty good: about twenty-five distinguishable pitches within one semitone. No wonder we have to hire piano tuners.
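To check that arithmetic, here is a quick back-of-the-envelope sketch in Python (the 2,000-pitch figure is Hartmann’s; the rest follows from it):

    import math

    # 2000 distinguishable pitches spanning a factor of 100 in frequency (100-10,000 Hz)
    steps = 2000
    ratio = 100 ** (1 / steps)        # frequency ratio between adjacent pitches
    print(ratio - 1)                  # ~0.0023, i.e., about 0.23%

    # An equal-tempered semitone is a frequency ratio of 2^(1/12)
    semitone = 2 ** (1 / 12)
    print(semitone - 1)               # ~0.059, i.e., about 5.9%

    # Number of distinguishable pitches inside one semitone
    print(math.log(semitone) / math.log(ratio))   # ~25

    # Octaves spanned by the 100-10,000 Hz range
    print(math.log2(100))             # ~6.6 octaves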

Pitch is perceived by “place” (different locations in the cochlea, part of the inner ear, respond to different frequencies) and by “timing” (neurons spike in synchrony with the frequency of the sound). For complex sounds, there is also a “template” theory, in which we learn to associate a collection of frequencies with a particular pitch. The perception of pitch is not a simple process.

There are some interesting differences between pitch perception in hearing and color perception in vision. For instance, on a piano play a middle C (262 Hz) and the next E (330 Hz), about a factor of 1.25 higher in frequency. What you hear is not a pure tone but a mixture of frequencies: a chord (albeit a simple one). But if you mix red light (450 THz) and green light (563 THz, again a factor of 1.25 higher in frequency), what you see is yellow, indistinguishable by eye from a single frequency of about 520 THz. I find it interesting and odd that the eye and ear differ so much in their ability to perceive mixtures of frequencies. I suspect it has something to do with the eye needing to form an image, so it does not have the luxury of allocating different locations on the retina to different frequencies. On the other hand, the cochlea does not form images, so it can distribute the frequency response over space to improve pitch discrimination. I suppose if we wanted to form detailed acoustic images with our ears, we would have to give up music.
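To make the comparison concrete, here is a minimal sketch in Python of the two-tone mixture (the NumPy implementation, sample rate, and one-second duration are my own choices for illustration):

    import numpy as np

    fs = 44100                        # sample rate in Hz (an arbitrary choice)
    t = np.arange(0, 1.0, 1 / fs)     # one second of samples

    # Mix middle C (262 Hz) with the E above it (330 Hz), a ratio near 5:4
    chord = np.sin(2 * np.pi * 262 * t) + np.sin(2 * np.pi * 330 * t)

    # The cochlea maps each component to a different place, so we hear
    # both notes of the chord; the eye would fuse the analogous mixture
    # of red and green light into a single perceived color (yellow).
    print(330 / 262)                  # ~1.26, close to the just ratio 5/4

One could write chord to a WAV file (for example, with scipy.io.wavfile.write, after scaling to 16-bit integers) and verify by ear that both notes remain audible.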

Hartmann continues, emphasizing that pitch perception is not just physics.
Attempts to build a purely mechanistic theory for pitch perception, like the place theory or the timing theory, frequently encounter problems that point up the advantages of less mechanistic theories, like the template theory. Often, pitch seems to depend on the listener’s interpretation.
Both Sacks and Hartmann discuss the phenomenon of absolute, or perfect, pitch (AP). Hartmann offers this observation, which I find amazing, suggesting that we should be training our first graders in pitch recognition.
Less than 1% of the population has AP, and it does not seem possible for adults to learn AP. By contrast, most people with musical skills have RP [relative pitch], and RP can be learned at any time in life. AP is qualitatively different from RP. Because AP tends to run in families, especially musical families, it used to be thought that AP is an inherited characteristic. Most of the modern research, however, indicates that AP is an acquired characteristic, but that it can only be acquired during a brief critical interval in one’s life—a phenomenon known as “imprinting.” Ages 5–6 seem to be the most important.
My sister (who has perfect pitch) and I both started piano lessons in early grade school. I guess she took those lessons more seriously than I did.

In Chapter 14 Hartmann addresses another issue: localization of sound. It is complex, and depends on differences in timing and loudness between the two ears.
The ability to localize the source of a sound is important to the survival of human beings and other animals. Although we regard sound localization as a common, natural ability, it is actually rather complicated. It involves a number of different physical, psychological, and physiological processes. The processes are different depending on where the sound happens to be with respect to your head. We begin with sound localization in the horizontal plane.
Interestingly, localization of sound becomes more difficult when echoes are present, which has implications for the design of concert halls. He writes
A potential problem occurs when sounds are heard in a room, where the walls and other surfaces in the room lead to reflections. Because each reflection from a surface acts like a new source of sound, the problem of locating a sound in a room has been compared to finding a candle in a dark room where all the walls are entirely covered with mirrors. Sounds come in from all directions and it’s not immediately evident which direction is the direction of the original source.

The way that the human brain copes with the problem of reflections is to perform a localization calculation that gives different weight to localization cues that arrive at different times. Great weight is placed on the information in the onset of the sound. This information arrives directly from the source before the reflections have a chance to get to the listener. The direct sound leads to localization cues such as ILD [interaural level difference], ITD [interaural time difference], and spectral cues that accurately indicate the source position. The brain gives much less weight to the localization cues that arrive later. It has learned that they give unreliable information about the source location. This weighting of localization cues, in favor of the earliest cues, is called the precedence effect.
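To get a feel for the size of the timing cue, here is a minimal sketch of the interaural time difference using the simple plane-wave approximation ITD ≈ d sin(θ)/c; the 18 cm ear spacing and the approximation itself are my assumptions for illustration, not taken from the book:

    import math

    c = 343.0    # speed of sound in air, m/s
    d = 0.18     # assumed distance between the ears, m

    # A plane wave from azimuth theta (0 = straight ahead) reaches the
    # far ear later than the near ear by roughly d*sin(theta)/c.
    for theta_deg in (0, 30, 60, 90):
        itd = d * math.sin(math.radians(theta_deg)) / c
        print(f"{theta_deg:3d} degrees: ITD = {itd * 1e6:6.0f} microseconds")

Even at its largest (a source directly to one side), the difference is only about half a millisecond, which shows how finely the auditory system must compare arrival times at the two ears.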
The enjoyment of music is a truly complicated event, involving much physics and physiology. The Principles of Musical Acoustics is a great place to start learning about it.
