Honolulu Star-Advertiser



To tug hearts, music first must tickle the neurons

The other day, Paul Simon was rehearsing a favorite song: his own “Darling Lorraine,” about a love that starts hot but turns very cold. He found himself thinking about a three-note rhythmic pattern near the end, where Lorraine (spoiler alert) gets sick and dies.

“The song has that triplet going on underneath that pushes it along, and at a certain point I wanted it to stop because the story suddenly turns very serious,” Simon said in an interview.

“The stopping of sounds and rhythms,” he added, “it’s really important, because, you know, how can I miss you unless you’re gone? If you just keep the thing going like a loop, eventually it loses its power.”

An insight like this may seem purely subjective, far removed from anything a scientist could measure. But now some scientists are aiming to do just that, trying to understand and quantify what makes music expressive — what specific aspects make one version of, say, a Beethoven sonata convey more emotion than another.

The results are contributing to a greater understanding of how the brain works and of the importance of music in human development, communication and cognition, and even as a potential therapeutic tool.

Research is showing, for example, that our brains understand music not only as emotional diversion, but also as a form of motion and activity. The same areas of the brain that activate when we swing a golf club or sign our name also engage when we hear expressive moments in music. Brain regions associated with empathy are activated, too, even for listeners who are not musicians.

And what really communicates emotion may not be melody or rhythm, but moments when musicians make subtle changes to those musical patterns.

Daniel J. Levitin, director of the laboratory for music perception, cognition and expertise at McGill University in Montreal, began puzzling over musical expression in 2002, after hearing a live performance of one of his favorite pieces, Mozart’s Piano Concerto No. 27.

“It just left me flat,” Levitin, who wrote the best seller “This Is Your Brain on Music” (Dutton, 2006), recalled in a video describing the project. “I thought, well, how can that be? It’s got this beautiful set of notes. The composer wrote this beautiful piece. What is the pianist doing to mess this up?”

Before entering academia, Levitin worked in the recording industry, producing, engineering or consulting for Steely Dan, Blue Öyster Cult, the Grateful Dead, Santana, Eric Clapton and Stevie Wonder. He has played tenor saxophone with Mel Tormé and Sting, and guitar with David Byrne. (He also performs around campus with a group called Diminished Faculties.)

After the Mozart mishap, Levitin and a graduate student, Anjali Bhatara, decided to try teasing apart some elements of musical expression in a rigorous scientific way.

He likened it to tasting two different pots de crème: “One has allspice and ginger and the other has vanilla. You know they taste different but you can’t isolate the ingredient.”

To decipher the contribution of different musical flavorings, they had Thomas Plaunt, chairman of McGill’s piano department, perform snatches of several Chopin nocturnes on a Disklavier, a piano with sensors under each key recording how long he held each note and how hard he struck each key (a measure of how loud each note sounded). The note-by-note data was useful because musicians rarely perform exactly the way the music is written on the page — rather, they add interpretation and personality to a piece by lingering on some notes and quickly releasing others, playing some louder, others softer.

The pianist’s recording became a blueprint, what researchers considered to be the 100 percent musical rendition. Then they started tinkering. A computer calculated the average loudness and length of each note Plaunt played. The researchers created a version using those average values so that the music sounded homogeneous and evenly paced, with every eighth note held for an identical amount of time, each quarter note precisely double the length of an eighth note.

They created other versions too: a 50 percent version, with note lengths and volume halfway between the mechanical average and the original, and versions at 25 percent, 75 percent, and even 125 percent and 150 percent, in which the pianist’s loud notes were even louder, his longest-held notes even longer.
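The scaling the researchers describe amounts to linear interpolation between the mechanical average and the original performance. The sketch below is an illustrative guess at that scheme, not the study’s actual code; the function and data are hypothetical.

```python
# Hypothetical sketch of the interpolation described above: each rendition
# blends the pianist's original note values (durations or loudness) with
# the mechanical average of those values.

def blend(original, percent):
    """Interpolate note values between the mechanical average (0 percent)
    and the original performance (100 percent). Values above 100 percent
    exaggerate the pianist's deviations from the average."""
    avg = sum(original) / len(original)
    return [avg + (percent / 100.0) * (v - avg) for v in original]

# Illustrative durations (ms) a pianist might hold four notes.
durations = [480, 350, 620, 510]

mechanical = blend(durations, 0)     # every note at the average length
half = blend(durations, 50)          # halfway between average and original
exaggerated = blend(durations, 150)  # deviations stretched past the original
```

At 0 percent every note collapses to the same average value; at 100 percent the original performance is reproduced exactly, which is why listeners in the study heard the 100 percent version as the pianist’s own.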

Study subjects listened to them in random order, rating how emotional each sounded. Musicians and nonmusicians alike found the original pianist’s performance most emotional and the averaged version least emotional.

But it was not just changes in volume and timing that moved them. Versions with even more variation than the original, at 125 percent and 150 percent, did not strike listeners as more emotional.

“I think it means that the pianist is very experienced in using these expressive cues,” said Bhatara, now a postdoctoral researcher at the Université Paris Descartes. “He’s using them at kind of an optimal level.”

And random versions with volume and note-length changes arbitrarily sprinkled throughout made almost no impression.

All of this makes perfect sense to Paul Simon.

“I find it fascinating that people recognize what the point of the original version is, that that’s their peak,” he said. “People like to feel the human element, but if it becomes excessive then I guess they edit it back. It’s gilding the lily, it’s too Rococo.”

THE ELEMENT OF SURPRISE

Say the cellist Yo-Yo Ma is playing a 12-minute sonata featuring a four-note melody that recurs several times. On the final repetition, the melody expands to six notes.

“If I set it up right,” Ma said in an interview, “that is when the sun comes out. It’s like you’ve been under a cloud, and then you are looking once again at the vista and then the light is shining on the whole valley.”

But that happens, he said, only if he is restrained enough to save some exuberance and emphasis for that moment, so that by the time listeners see that musical sun they have not already “been to a disco and its light show” and been “blinded by cars driving at night with the headlights in your eyes.”

Levitin’s results suggest that the more surprising the moments in a piece, the more emotion listeners perceive — if those moments seem logical in context.

“It’s deviation from a pattern,” Ma said. “A surprise is only a surprise when you know it departs from something.”

He cited Schubert’s E-Flat Trio for piano, violin and cello as an example. It goes from a “march theme that’s in minor and it breaks out into major, and it’s one of those goose-bump moments.”

The departure “could be something incredibly slight that means something huge, or it could be very large but that’s actually a fake-out,” Ma said.

The singer Bobby McFerrin, who visited Levitin’s lab and walked through several experiments, said in a video of that visit that “one of the things that I have found valuable to me in a performance, whether I’m performing or someone else is, is a certain element of naïveté,” as if “as we’re performing we’re still discovering the music.”

In an interview, the singer Rosanne Cash said the experiments showed that beautiful compositions and technically skilled performers could do only so much. Emotion in music depends on human shading and imperfections, “bending notes in a certain way,” Cash said, “holding a note a little longer.”

She said she learned from her father, Johnny Cash, “that your style is a function of your limitations, more so than a function of your skills.”

“You’ve heard plenty of great, great singers that leave you cold,” she said. “They can do gymnastics, amazing things. If you have limitations as a singer, maybe you’re forced to find nuance in a way you don’t have to if you have a four-octave range.”

THE MUSICAL BRAIN

The brain processes musical nuance in many ways, it turns out. Edward W. Large, a music scientist at Florida Atlantic University, scanned the brains of people with and without experience playing music as they listened to two versions of a Chopin etude: one recorded by a pianist, the other stripped down to a literal version of what Chopin wrote, without human-induced variations in timing and dynamics.

During the original performance, brain areas linked to emotion activated much more than with the uninflected version, showing bursts of activity with each deviation in timing or volume.

So did the mirror neuron system, a set of brain regions previously shown to become engaged when a person watches someone doing an activity the observer knows how to do — dancers watching videos of dance, for example. But in Large’s study, mirror neuron regions flashed even in nonmusicians.

Maybe those regions, which include some language areas, are “tapping into empathy,” he said, “as though you’re feeling an emotion that is being conveyed by a performer on stage,” and the brain is mirroring those emotions.

Regions involved in motor activity, everything from knitting to sprinting, also lighted up with changes in timing and volume.

Anders Friberg, a music scientist at KTH Royal Institute of Technology in Sweden, found that the speed patterns of people’s natural movements — moving a hand from one place to another on a desk or jogging and slowing to stop — match tempo changes in music that listeners rate as most pleasing.

“We got the best-sounding music from the velocity curve of natural human gestures, compared to other curves of tempos not found in nature,” Friberg said. “These were quite subtle differences, and listeners were clearly distinguishing between them. And these were not expert listeners.”

The Levitin project found that musicians were more sensitive to changes in volume and timing than nonmusicians. That echoes research by Nina Kraus, a neurobiologist at Northwestern University in Illinois, which showed that musicians are better at hearing sound against background noise, and that their brains expend less energy detecting emotion in babies’ cries.

Separately, the Levitin team found that children with autism essentially rated each nocturne rendition equally emotional, finding the original no more emotionally expressive than the mechanical version. But in other research, the team found that children with autism could label music as happy, sad or scary, suggesting, Levitin said, that “their recognition of musical emotions may be intact without necessarily having those emotions evoked, and without them necessarily experiencing those emotions themselves.”

A MATTER OF TIME

The ability to keep time to music appears to be almost unique to humans — not counting Snowball the cockatoo, which dances in time to “Everybody,” by the Backstreet Boys, and became a YouTube sensation. Both the Levitin and the Large studies found that the timing of notes was more important than loudness or softness in people’s perceptions of emotion in music.

This may be a product of evolutionary adaptation, said Kraus, since “a nervous system that is sensitive and well tuned to timing differences would be a nervous system that, from an evolutionary standpoint, would be more likely to escape potential enemies, survive and make babies.”

Changes in the expected timing of a note might generate the emotional equivalent of “depth perception, where slightly different images going to your two eyes allows you to see depth,” said Joseph E. LeDoux, a neuroscientist at New York University.

And musical timing might relate to the importance of timing in speech. “The difference between a B and a P, for example, is a difference in the timing involved in producing the sound,” said Aniruddh D. Patel, a music scientist at the Neurosciences Institute in San Diego. “We don’t signal the difference between P and B by how loud it is.”

Michael Leonhart, who played trumpet and produced for Steely Dan, said he thought “the ears of most people have started to become less sensitive to dynamics” as music recordings crank up the volume and “the world has become a louder place.”

Subtle timing differences, on the other hand, are critical, Leonhart said, citing a triplet figure in the beginning of Steely Dan’s song “Josie.”

“The tendency is to start rushing it, to get excited,” Leonhart said. But the key is “to lay it back, don’t rush, make sure it’s not ahead of the snare drum. It changes the slingshot effect of where things snap and pop.”

Simon plays with timing constantly, surfing bar lines. He squeezes lyrics like “cinematographer” — six short notes — into the space of a two-syllable word, and will “land on a long word with a consonant at the end, so that you really hear the word,” he said. “My brain is working that way — it’s dividing up everything. I really have a certain sense of where the pocket of the groove is, and I know when you have to reinforce it and I know when you want to leave it.”

Musicians like Simon consider slight timing variations so crucial that they eschew the drum machines commonly used in recordings. Levitin says Stevie Wonder uses a drum machine because it has so many percussion voices, but inserts human-inflected alterations, essentially mistakes, so beats do not always line up perfectly.

And Geoff Emerick, a recording engineer for the Beatles, said: “Often when we were recording some of those Beatles rhythm tracks, there might be an error incorporated, and you would say, ‘That error sounds rather good,’ and we would actually elaborate on that.

“When everything is perfectly in time, the ear or mind tends to ignore it, much like a clock ticking in your bedroom — after a while you don’t hear it.”

© 2011 The New York Times Company