Study finds listeners without formal musical training can still track complex tonal structures, revealing how the brain processes context over time

Rochester, New York – Listeners do not need years of piano lessons or formal music theory classes to make sense of complex music. New research suggests that the human brain is far more capable than previously believed when it comes to understanding the deeper structure of sound. Even people with no musical training can follow rich tonal patterns, offering new insight into how the brain naturally processes context.

That conclusion sits at the center of a new study from the University of Rochester, published in Psychological Science. The research challenges a long-standing debate in music cognition about whether formal training is necessary to grasp higher-order tonal structures—the large-scale harmonic framework that gives music direction, tension, and resolution.

For decades, many researchers assumed that understanding these structures required explicit learning. Terms like tonic, dominant, and cadence are typically taught in classrooms, often after years of practice. But the new findings suggest that everyday listening may be enough for the brain to quietly absorb these rules.

“Formal training in music—including music theory—fine-tunes the ear to pick up tonal patterns in music, like tonic, dominant, and cadences,” says Elise Piazza, an assistant professor in the Departments of Brain and Cognitive Sciences and Neuroscience and the senior author of the study. “But it turns out that with zero training, people are actually picking up on those structures just from listening to music over the lifespan.”

Music, like language, is built in layers. Individual notes form phrases, phrases build sections, and sections come together to create a complete piece. While these layers are easy to recognize for trained musicians, it has been less clear how people without training perceive them. Until now, few studies had directly compared how experts and novices process musical context at multiple timescales.

To explore that gap, the University of Rochester team designed a set of experiments that carefully controlled how much musical context listeners received. The study was co-led by Riesa Cassano-Coleman, a PhD candidate in brain and cognitive sciences, and Sarah Izen, a former postdoctoral researcher in the same field. Together, the researchers developed a novel approach that involved scrambling music in different ways to disrupt its structure.

The idea of context is central to daily life. Humans rely on it constantly to anticipate what comes next, whether crossing a busy street or following a conversation. In music, context is what creates emotional buildup. A film score, for example, depends on accumulated musical cues to signal danger, romance, or relief.

In the study, participants were asked to complete tasks that required them to rely on context, such as predicting future notes or remembering earlier ones. Surprisingly, people without musical training often performed just as well as trained musicians. Their responses suggested that they were using knowledge similar to music theory—without realizing it.

“Across a variety of tasks,” says Piazza, “nonmusicians performed similarly to musicians.”

The research consisted of four separate experiments focused on memory, prediction, event segmentation, and categorization. In each case, participants listened to excerpts from Tchaikovsky’s Album for the Young, a collection of piano pieces known for their clear tonal structure. The music was altered by scrambling it at different timescales, allowing the researchers to control how much harmonic context remained intact.

One of the most revealing experiments focused on prediction. Participants heard musical sequences that had been scrambled in three distinct ways. In the 8B condition, eight bars of music were left intact, preserving a large amount of context. In the 2B condition, the music was scrambled every two bars. In the most disrupted version, 1B, the scrambling occurred every single bar.
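The article does not publish the team's stimulus-generation code, but the scrambling logic it describes can be sketched simply: treat the piece as an ordered list of bars, group the bars into chunks of a given size, and shuffle the chunk order. The function name `scramble_by_bars` and the 16-bar excerpt below are illustrative assumptions, not the researchers' actual implementation.

```python
import random

def scramble_by_bars(bars, chunk_size, seed=None):
    """Split a sequence of bars into chunks of `chunk_size`,
    shuffle the chunk order, and re-concatenate.

    Larger chunks leave more local harmonic context intact:
    chunk_size=8 mimics the 8B condition, 2 the 2B condition,
    and 1 the fully scrambled 1B condition."""
    rng = random.Random(seed)
    chunks = [bars[i:i + chunk_size] for i in range(0, len(bars), chunk_size)]
    rng.shuffle(chunks)
    return [bar for chunk in chunks for bar in chunk]

# A hypothetical 16-bar excerpt, with bars labeled by original position:
piece = list(range(16))

for label, size in [("8B", 8), ("2B", 2), ("1B", 1)]:
    print(label, scramble_by_bars(piece, size, seed=0))
```

Within each chunk the original bar order is preserved, so the 8B output still contains long runs of coherent music, while the 1B output destroys nearly all harmonic context between adjacent bars.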

After listening to each sequence, participants were asked to predict which musical measure should come next. As expected, accuracy improved as more context was preserved. What stood out, however, was that musicians and nonmusicians improved at nearly the same rate. Increased musical training did not reliably predict better overall performance.

The results suggest that both groups were integrating musical context in similar ways. Rather than relying on explicit knowledge, nonmusicians appeared to draw on patterns they had absorbed simply by listening to music throughout their lives.

The findings also connect music cognition to a growing body of research on language processing. In language studies, scientists often scramble words, sentences, or paragraphs to test how the brain handles disrupted context. These experiments have revealed that different brain areas specialize in processing short-term versus long-term context.

The University of Rochester study applies a similar idea to music, showing that the brain may treat musical structure much like linguistic structure. Both rely on prediction, memory, and the ability to integrate information over time.

“We know from cognitive science that context helps the brain forecast upcoming events, informing our next action,” Piazza explains. Prediction allows people to catch a ball, avoid obstacles on a sidewalk, or finish someone else’s sentence. Music, it turns out, may rely on the same basic mental tools.

“In the neuroscience of language, there are different brain areas in charge of considering context that is either very short or very long,” says Piazza. “This is an exciting new field that has potential for revealing how context processing changes across the lifespan and how it might interact with aging and cognitive decline.”

The study is among the first to examine this kind of context processing in music in such a systematic way. Its implications go beyond listening alone. The researchers note that musical performance places heavy demands on memory and movement, raising new questions about how trained musicians manage large-scale structure while playing.

“I think there is a lot of potential to look at, for example, how highly trained musicians are doing this while they play,” Piazza says. “A lot of musicians feel like they hold their memory of a piece in their fingers. What are the motor processes for having that whole context stored up as they play? This research could have broader implications about how the brain uses this sort of context.”

Taken together, the findings suggest that the human brain is remarkably skilled at learning patterns without instruction. Long before anyone studies music formally, their brain may already understand far more than expected—quietly tracking structure, predicting what comes next, and turning sound into meaning.

 
