Auditory plasticity and development: Disentangling the temporal dynamics of complex representation
Gabriella Musacchia, Ph.D.
Dr. Musacchia is a Post-Doctoral Fellow at the Infancy Studies Laboratory, Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ. Her research focuses on how the brain makes sense of the complex world around us in order to understand language and music.
Abstract:
Both speech and music perception rely on pitch transitions that unfold in a controlled way over time. In everyday listening, the pitch and rhythm of speech and music blend together to create a seamless percept of pitch fluctuations organized in time, and this rhythmic organization of pitch aids cognitive processing in both domains. An extensive body of research has shown that music training leads to brain plasticity, including enhanced perception and processing of pitch and rhythm as well as of spoken and written language. However, the neuronal mechanisms of pitch and rhythm are typically investigated separately, and the interrelationship between tone and rhythm processing is only beginning to be explored.
In this study, we used EEG and ERP time-frequency analysis methods, developed in animal and adult human experiments, to evaluate the contribution of cortical oscillations to typical infant auditory development. By comparing the frequency composition of neural responses to auditory tone sequences, we were able to disentangle the temporal dynamics of pitch and rhythm representation. The results of this investigation suggest that specific mechanisms may support simultaneous processing of complex auditory representations.
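As a concrete illustration of the time-frequency approach, the sketch below decomposes a single epoch with complex Morlet wavelets, a standard method for estimating the frequency composition of evoked responses. This is a minimal sketch in Python/NumPy under stated assumptions: the sampling rate, frequency range, wavelet width, and the synthetic theta burst are all hypothetical choices for illustration, not the study's actual settings or data.

```python
import numpy as np

# Synthetic single-channel "EEG" epoch; all values are illustrative, not study data.
sfreq = 250.0                                  # sampling rate in Hz (assumed)
t = np.arange(-0.5, 1.5, 1.0 / sfreq)          # epoch time axis in seconds
rng = np.random.default_rng(0)
# A decaying 6 Hz (theta) burst after "stimulus onset" at t = 0, plus noise
burst = np.where(t > 0, np.sin(2 * np.pi * 6 * t) * np.exp(-3 * np.clip(t, 0, None)), 0.0)
epoch = burst + 0.5 * rng.standard_normal(t.size)

# Complex Morlet wavelet decomposition: power at each (frequency, time) point
freqs = np.arange(3.0, 31.0)                   # 3-30 Hz, delta through beta
n_cycles = 5                                   # wavelet width in cycles
power = np.empty((freqs.size, t.size))
for i, f in enumerate(freqs):
    sigma_t = n_cycles / (2 * np.pi * f)       # temporal std of the Gaussian envelope
    wt = np.arange(-3 * sigma_t, 3 * sigma_t, 1.0 / sfreq)
    wavelet = np.exp(2j * np.pi * f * wt) * np.exp(-wt ** 2 / (2 * sigma_t ** 2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))     # unit-energy normalization
    analytic = np.convolve(epoch, wavelet, mode="same")  # complex analytic signal
    power[i] = np.abs(analytic) ** 2

# Express power as dB change from the prestimulus baseline, as is typical in ERP work
baseline = power[:, t < 0].mean(axis=1, keepdims=True)
power_db = 10 * np.log10(power / baseline)
```

The n_cycles parameter controls the trade-off between temporal and spectral resolution, which matters here because disentangling fast pitch-related activity from slower rhythm-related oscillations requires adequate resolution in both dimensions.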
We also advance the hypothesis that peripheral pathways carrying largely segregated representations of tonal “content” (e.g., pitch) and “context” (e.g., rhythm) feed into separate thalamocortical systems that could integrate tone content within rhythmic context in auditory cortex. This hypothesis provides a framework for generating testable predictions about the early stages of music processing in the human brain, and may also be useful for understanding how the neuronal “context” of rhythm can facilitate music-related plasticity. Because the thalamocortical circuit is shared by speech and music processing, our data and hypothesis also offer a substantive explanation of why music training drives plasticity in speech-encoding mechanisms.
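One way to make the integration step of this hypothesis concrete is as a toy phase-dependent gain model, in which the rhythmic “context” oscillation gates the cortical response to tonal “content”. The sketch below is purely illustrative under that assumption: the oscillation frequency, the evoked-response shape, and the multiplicative gain rule are hypothetical choices, not claims about the actual circuit.

```python
import numpy as np

sfreq = 250.0
t = np.arange(0.0, 2.0, 1.0 / sfreq)           # two seconds of simulated time

# "Context" stream: a 2 Hz rhythm-tracking oscillation sets moment-to-moment gain
context_gain = 0.5 * (1.0 + np.cos(2 * np.pi * 2.0 * t))   # in [0, 1], peaks on the beat

def evoked(t, onset, tau=0.05):
    """Stereotyped tone-evoked transient (a hypothetical alpha-function shape)."""
    dt = np.clip(t - onset, 0.0, None)
    return dt * np.exp(-dt / tau)

# "Content" stream: identical tone responses, one on the beat and one off the beat
on_beat = evoked(t, onset=0.5)     # arrives at the high-excitability phase
off_beat = evoked(t, onset=0.75)   # arrives at the low-excitability phase

# Cortical integration modeled as a multiplicative, phase-dependent gain on content
print(f"on-beat peak:  {(context_gain * on_beat).max():.4f}")
print(f"off-beat peak: {(context_gain * off_beat).max():.4f}")  # smaller off the beat
```

In this toy model, identical tones evoke larger integrated responses when they arrive at the rhythmically predicted, high-excitability phase, which is the kind of testable prediction the content/context framework is meant to generate.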