
November 23, 2000

Brains interpret sound differently



by Sigalit Hoffman

Although sound registers on both the right and the left sides of the brain, Montreal Neurological Institute researcher Robert Zatorre has discovered that each side is responsible for interpreting different types of sound. In a talk he gave to Concordia’s Psychology Department last term, the former Concordia professor and native of Argentina presented evidence that the right side of the brain interprets music while the left side interprets speech.

Zatorre tested this idea using positron emission tomography (PET) to measure regional changes in blood flow in the brain, an indicator of brain activity. When subjects listened to sound that, like music, changed octaves but stayed constant in rate, blood flow increased in the right side of their brain. In contrast, when subjects listened to sound that, like speech, varied in rate but stayed in the same octave, blood flow increased in the left side of their brain.

Zatorre’s PhD student found further evidence for this hypothesis. She measured the size of the right and left auditory cortices, the parts of the brain responsible for registering sound, and found that the left auditory cortex was larger, and larger in a very specific way. The brain is composed of both grey and white matter; white matter is made up of myelin, which coats nerves so that they can send impulses faster. While the amount of grey matter in each auditory cortex was the same, the left side was larger because it contained more white matter. This is consistent with the idea that the left side is better at distinguishing sound, like speech, that changes rapidly over time.

Zatorre explains that each side’s different expertise arises from differences in the number and organization of its neurons. He believes the right side has more neurons; because they are not coated with myelin, they can sit closer together, providing finer resolution of a sound’s frequency. The reverse is true of the left side, making it less capable of resolving frequency but excellent at registering changes in rate.

This model is known as the “spectral-temporal tradeoff,” a term Zatorre coined. The idea behind the model came from a limitation of the spectrogram, an instrument that can precisely depict either the frequency of a sound or its timing, but not both at once. Zatorre reasoned that the same kind of relative specialization might exist in humans as well. “Maybe the relative specialization of the left and right auditory regions has to do with this kind of difference,” Zatorre said. “According to this idea, [there is] a tradeoff.”

At the beginning of his presentation, Zatorre lamented how little is known about the structure and function of the auditory cortex. He noted that animal models offer little insight into the human auditory system, and that knowledge of human hearing lags far behind knowledge of the human visual system. “What we’re aiming at is to try and understand those functional features that are particularly relevant for the human brain,” Zatorre said.

“For that we have to ask the question, What are the most complex sounds that the human brain has evolved to process?” Since music and speech are uniquely human aspects of auditory function, Zatorre looked to them to shed light on this complex and poorly understood aspect of human brain function.