Biological link discovered between music and speech

Neuroscientists have uncovered evidence of a deep biological link between human music and speech.

The Duke University team found that the musical scales most commonly used over the centuries are those that come closest to mimicking the physics of the human voice.

This could explain why minor chord music is seen as sad and major chord music as happy.

Dale Purves, a professor of neurobiology, found that sad or happy speech can be categorized into major and minor intervals, just as music can. In a second study, Kamraan Gill found that the most commonly used musical scales are based on the physics of the vocal tones humans produce.

“There is a strong biological basis to the aesthetics of sound,” Purves said. “Humans prefer tone combinations that are similar to those found in speech.”

The team collected a database of major and minor melodies from about 1,000 classical music compositions and more than 6,000 folk songs and analyzed their tonal qualities.

They also had 10 people speak a series of single words with 10 different vowel sounds in either excited or subdued voices, as well as short monologues.

They then compared the tones that distinguished the major and minor melodies with the tones of speech uttered in the different emotional states. They found the sound spectra of the speech tones could be sorted the same way as the music, with excited speech exhibiting more major musical intervals and subdued speech more minor ones.

Although there are millions of possible scales that could be used to divide the octave, most human music is based on scales comprising only five to seven tones. The researchers found that the popularity of musical scales could be predicted by how well they match the series of harmonics characteristic of vowels in speech.
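The kind of match being measured can be illustrated with a short sketch. This is not the researchers' analysis; it simply compares the 12 equal-tempered chromatic intervals with the small-integer frequency ratios that arise in a harmonic series (standard just-intonation values are assumed), showing how closely familiar intervals such as the fifth track harmonic ratios.

```python
import math

# Assumed just-intonation ratios for each semitone step of the chromatic
# scale; these small-integer ratios are the kind produced by the
# harmonic series of a vibrating source such as the vocal folds.
JUST_RATIOS = {
    0: 1/1, 1: 16/15, 2: 9/8, 3: 6/5, 4: 5/4, 5: 4/3,
    6: 45/32, 7: 3/2, 8: 8/5, 9: 5/3, 10: 9/5, 11: 15/8, 12: 2/1,
}

def equal_tempered_ratio(semitones: int) -> float:
    """Frequency ratio of an interval in 12-tone equal temperament."""
    return 2 ** (semitones / 12)

def deviation_cents(semitones: int) -> float:
    """How far (in cents; 1200 per octave) the equal-tempered interval
    sits from the nearby small-integer harmonic ratio."""
    return 1200 * math.log2(equal_tempered_ratio(semitones) / JUST_RATIOS[semitones])

for step in JUST_RATIOS:
    print(f"{step:2d} semitones: ET {equal_tempered_ratio(step):.4f} "
          f"vs just {JUST_RATIOS[step]:.4f} ({deviation_cents(step):+.1f} cents)")
```

The perfect fifth (7 semitones) lands within about 2 cents of the 3:2 harmonic ratio, while intervals like the tritone deviate much more, which is one informal way to see why some tone combinations align better with vocal harmonics than others.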

Though they worked only with Western music and spoken English, the researchers believe the findings are more widely applicable. Most of the frequency ratios of the chromatic musical scale can be found in the speech of a variety of languages, and their analysis, which included speakers of Mandarin Chinese, showed similar results.

The studies appear in the Journal of the Acoustical Society of America (JASA) and PLOS ONE.