School of Communication Sciences and Disorders Auditory Cognitive Neuroscience Laboratory
Projects

The major goals of our research are to better understand the neural basis of complex auditory perception and cognition and how these processes change with listening experience and training. We use event-related brain potentials (ERPs) to record the electrical activity of the human nervous system and relate brain responses to behavioral performance. We are currently investigating how an individual's perceptual and cognitive abilities for speech and music emerge from their underlying brain activity. We are also examining how listening expertise and/or training (e.g., music lessons, bilingualism) influence an individual's auditory skills and how these benefits might transfer to seemingly unrelated cognitive abilities, including memory and attention.

 

How does experience change the human brain?

We are investigating how experience and certain forms of training change the brain. Musicians have proven to be an exceptional model for studying auditory plasticity given their intense, long-term experience manipulating complex sound information. Our neuroimaging studies demonstrate experience-dependent tuning of the human auditory system with music engagement. Remarkably, musicians' enhancements in brain function are not restricted to music processing; our studies also reveal important benefits to speech and language functions as well as general, non-auditory cognitive abilities (e.g., aspects of memory). We are exploring how music instruction and other forms of experience (e.g., bilingualism) could be used to strengthen speech/language skills and general cognitive abilities across the lifespan.

 

[Figure: musicians_tuning]

Psychophysical tuning curves (PTCs) reveal sharper cochlear tuning in musicians. (A) Estimates of tuning are two times sharper for forward compared to simultaneous masking. Relative to nonmusicians (NMs), musicians (Ms) demonstrate more selective (i.e., narrower) auditory filters, particularly at higher CFs (4 kHz). (B) Years of formal musical training predict increased filter sharpness at 4 kHz measured via simultaneous masking; longer music experience is associated with higher Q10. Data from Bidelman et al. (2014).
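The Q10 sharpness metric in panel (B) is conventionally defined as the tuning-curve tip frequency divided by the curve's bandwidth 10 dB above the tip. A minimal sketch of that computation (Python with NumPy; the function and the synthetic V-shaped curve are illustrative assumptions, not the analysis code from the study):

```python
import numpy as np

def q10(masker_freqs, masker_levels):
    """Q10 = tip frequency / bandwidth 10 dB above the tuning-curve tip.

    masker_freqs  : masker frequencies (Hz), ascending order
    masker_levels : masker level needed to mask the probe (dB SPL)
    """
    tip = int(np.argmin(masker_levels))      # PTC tip = most effective masker
    cf = masker_freqs[tip]
    cutoff = masker_levels[tip] + 10.0       # criterion level, 10 dB above tip
    # interpolate the frequencies where each skirt crosses the criterion
    f_lo = np.interp(cutoff, masker_levels[:tip + 1][::-1],
                     masker_freqs[:tip + 1][::-1])
    f_hi = np.interp(cutoff, masker_levels[tip:], masker_freqs[tip:])
    return cf / (f_hi - f_lo)

# Synthetic V-shaped PTC with its tip at 4 kHz (illustrative data only)
freqs = np.array([2000.0, 3000.0, 4000.0, 5000.0, 6000.0])
levels = np.array([40.0, 20.0, 0.0, 20.0, 40.0])
print(q10(freqs, levels))  # 10-dB bandwidth is 1 kHz, so Q10 = 4.0
```

Higher Q10 means a narrower 10-dB bandwidth relative to the tip frequency, i.e., a more selective auditory filter.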

 

 

[Figure: ffrs]

Experience-dependent enhancement of brainstem responses resulting from musical training. (top left) Brainstem FFR time-waveforms elicited by a musical tone recorded in musician and nonmusician listeners (red and blue, respectively). (bottom left) Expanded time window around the onset of the brainstem response (≈ 17 ms). Relative to nonmusicians, musicians' responses are both larger and more temporally precise, as evidenced by their more robust amplitude (top right) and shorter onset duration (bottom right). Musical training thus improves both the precision and magnitude of time-locked neural activity to complex sounds. Data from Bidelman et al. (2011).
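The onset and magnitude measures in this figure can be approximated from a single averaged FFR waveform. A rough sketch (Python/NumPy; the 3-standard-deviation onset criterion and baseline window are our illustrative assumptions, not the procedure used in the study):

```python
import numpy as np

def ffr_metrics(waveform, fs, baseline_ms=10.0):
    """Estimate FFR onset latency (ms) and response magnitude (RMS).

    Onset = first sample whose magnitude exceeds 3 standard deviations
    of the pre-stimulus baseline; magnitude = RMS of everything after it.
    """
    n_base = int(baseline_ms * fs / 1000)
    threshold = 3.0 * np.std(waveform[:n_base])
    onset = int(np.argmax(np.abs(waveform) > threshold))
    latency_ms = onset * 1000.0 / fs
    rms = np.sqrt(np.mean(waveform[onset:] ** 2))
    return latency_ms, rms

# Toy waveform: silence for 17 ms, then a 300-Hz tone (mimicking the
# ~17-ms brainstem onset noted in the caption)
fs = 10000
t = np.arange(int(0.05 * fs)) / fs
x = np.where(t >= 0.017, np.sin(2 * np.pi * 300 * t), 0.0)
latency, rms = ffr_metrics(x, fs)
```

On real averaged responses, larger RMS and earlier, sharper onsets correspond to the musician advantages shown in the right panels.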

 
[Figure: musicians_noise]

Musical training improves speech-in-noise listening abilities. (A) Relative to their nonmusician peers, musically trained listeners are better at discriminating fine acoustic details of speech sounds in both clean and noisy environments. (B) Behavioral benefits for speech-in-noise listening are predicted by how well speech cues are represented in listeners' brain responses. Musicians show enhanced neural encoding and superior (i.e., lower) discrimination thresholds for detecting changes in speech acoustics. Data from Bidelman & Krishnan (2010).
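Discrimination thresholds like those in panel (B) are commonly estimated with an adaptive tracking procedure. Below is a generic 2-down/1-up staircase (a textbook Levitt-style track converging on the 70.7%-correct point), sketched purely for illustration; it may not match the exact psychophysical procedure used in the cited study:

```python
import numpy as np

def staircase_2down1up(respond, start, step, n_reversals=8):
    """2-down/1-up adaptive staircase for threshold estimation.

    respond(level) -> True if the listener answered correctly.
    Two consecutive correct trials lower the level; one error raises it.
    The track converges on the 70.7%-correct point; the threshold is the
    mean level across the recorded reversals.
    """
    level, run, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            run += 1
            if run == 2:                 # two in a row: make the task harder
                run = 0
                if direction == +1:      # was moving up -> reversal
                    reversals.append(level)
                level -= step
                direction = -1
        else:                            # any error: make the task easier
            run = 0
            if direction == -1:          # was moving down -> reversal
                reversals.append(level)
            level += step
            direction = +1
    return float(np.mean(reversals))

# Idealized listener who is always correct at levels >= 5 (arbitrary units)
threshold = staircase_2down1up(lambda level: level >= 5, start=10, step=1)
print(threshold)  # track oscillates between 4 and 5 -> estimate 4.5
```

With a real (probabilistic) listener, the track hovers around the level yielding ~70.7% correct; lower converged levels indicate finer discrimination.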

 
Selected publications:
Bidelman, G. M., Schug, J. M., Jennings, S. G., & Bhagat, S. P. (2014). Psychophysical auditory filter estimates reveal sharper cochlear tuning in musicians. Journal of the Acoustical Society of America, 136(1), EL33-39.
 
 
Bidelman, G. M., Krishnan, A., & Gandour, J. T. (2011). Enhanced brainstem encoding predicts musicians’ perceptual advantages with pitch. European Journal of Neuroscience, 33(3), 530-538.
 
Bidelman, G. M., & Krishnan, A. (2010). Effects of reverberation on brainstem representation of speech in musicians and non-musicians. Brain Research, 1355, 112-125.

 

Are there cross-domain transfer effects between music and language experience?

Recent neuroimaging studies suggest that some aspects of music and language are processed by shared brain regions. This overlap raises the intriguing possibility that experience in one domain (e.g., music training) might transfer to benefit processing in the other domain (e.g., speech). Neurocognitive models suggest the reverse might also be true, i.e., intense language experience improving music listening skills. In both ERP and behavioral studies, we are investigating how music and language experience (particularly with tone languages) improves the neural processing of music and language signals. Our findings suggest that, under some circumstances, transfer between these experiences can be bidirectional (music→language and language→music).

[Figure: pitchtracking]

Cross-domain transfer between music and language experience. Human brainstem responses were recorded in musicians, English-speaking nonmusicians, and speakers of a tone language (Mandarin Chinese). Relative to nonmusicians, brainstem pitch tracking was superior in both musicians and Chinese listeners in response to both musical and linguistic pitch patterns. These findings demonstrate experience-dependent enhancement and transfer of pitch encoding regardless of the domain of expertise (music or language). Data from Bidelman et al. (2011).
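Brainstem pitch tracking of this sort is typically quantified by extracting a running f0 contour from the FFR and comparing it with the stimulus contour. A simplified sketch of the f0-extraction step using a sliding-window autocorrelation (Python/NumPy; window sizes and the f0 search range are illustrative assumptions):

```python
import numpy as np

def f0_track(x, fs, win_ms=40.0, hop_ms=10.0, fmin=80.0, fmax=400.0):
    """Sliding-window autocorrelation pitch track (Hz), one value per frame."""
    win, hop = int(win_ms * fs / 1000), int(hop_ms * fs / 1000)
    lag_lo, lag_hi = int(fs / fmax), int(fs / fmin)
    track = []
    for start in range(0, len(x) - win, hop):
        seg = x[start:start + win]
        ac = np.correlate(seg, seg, mode="full")[win - 1:]   # lags >= 0
        best = lag_lo + int(np.argmax(ac[lag_lo:lag_hi]))    # strongest period
        track.append(fs / best)
    return np.array(track)

# Sanity check on a steady 200-Hz tone (a real FFR would instead follow a
# time-varying Mandarin tone contour)
fs = 16000
t = np.arange(int(0.5 * fs)) / fs
track = f0_track(np.sin(2 * np.pi * 200 * t), fs)
```

Pitch-tracking accuracy can then be summarized as the correlation (or RMS error) between the stimulus and response f0 contours.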

[Figure: crossDomain]

Cross-domain transfer between music and language experience. Both musicians and tone-language speakers (Cantonese) show superior auditory pitch perception compared to English-speaking nonmusician controls. Cantonese listeners also show superior music perception performance compared to nonmusicians, indicating that tone-language experience can transfer to benefit music processing skills. Musicians and tone-language bilinguals also show improved spatial working memory, suggesting these two experiences also tune general cognitive abilities. Data from Bidelman et al. (2013).

 Selected publications:
Bidelman, G. M., Hutka, S., & Moreno, S. (2013). Tone language speakers and musicians share enhanced perceptual and cognitive abilities for musical pitch: Evidence for bidirectionality between the domains of language and music. PloS One, 8(4), e60676.
 
Bidelman, G. M., Gandour, J. T., & Krishnan, A. (2011). Cross-domain effects of music and language experience on the representation of pitch in the human auditory brainstem. Journal of Cognitive Neuroscience, 23(2), 425-434.

Bidelman, G. M., Gandour, J. T., & Krishnan, A. (2011). Musicians and tone-language speakers share enhanced brainstem encoding but not perceptual benefits for musical pitch. Brain and Cognition, 77(1), 1-10.

 

What is the neural basis for musical consonance, dissonance, and the hierarchical arrangement of pitch in Western music?

Why do certain musical pitch relationships sound more pleasant than others? Why have composers adopted the scales and tuning systems they have? The origins of musical consonance have been debated since the time of Pythagoras. In a series of studies, we are investigating the neural basis of consonance and the musical pitch hierarchy. We have found robust correlates of listeners' behavioral preferences for musical chords and intervals in the brainstem and as low as the auditory nerve (AN). These findings suggest that certain perceptual attributes of musical pitch are present at the earliest (and pre-attentive) stages of neurophysiological processing.

[Figure: consonance]

 

Comparison between auditory nerve, human brainstem evoked potentials, and behavioral responses to musical intervals. (top left) AN responses correctly predict perceptual attributes of consonance, dissonance, and the hierarchical ordering of musical dyads. AN neural pitch salience is shown as a function of the number of semitones separating the interval's lower and higher pitch over the span of an octave (i.e., 12 semitones). Consonant musical intervals (blue) tend to fall on or near peaks in neural pitch salience whereas dissonant intervals (red) tend to fall within trough regions, indicating more robust encoding for the former. Among intervals common to a single class (e.g., all consonant intervals), AN responses show differential encoding resulting in the hierarchical arrangement of pitch typically described by Western music theory (i.e., Un > Oct > P5 > P4, etc.). (top middle) Neural correlates of musical consonance observed in human brainstem responses. As in the AN, brainstem responses reveal stronger encoding of consonant relative to dissonant pitch relationships. (top right) Behavioral consonance ratings reported by human listeners. Dyads considered consonant according to music theory are preferred over those considered dissonant [minor 2nd (m2), tritone (TT), major 7th (M7)]. (bottom row) Auditory nerve (left) and brainstem (middle) responses similarly predict behavioral chordal sonority ratings (right) for the four most common triads in Western music. Chords considered consonant according to music theory (i.e., major, minor) elicit more robust subcortical responses and show an ordering expected by music practice (i.e., major > minor >> diminished > augmented). AN data from Bidelman and Heinz (2011); interval data from Bidelman and Krishnan (2009); chord data from Bidelman and Krishnan (2011).
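The neural pitch salience metric plotted above reflects how strongly a single common periodicity is represented in the response. A toy demonstration of the underlying idea (Python/NumPy), using normalized autocorrelation of the raw dyad waveforms as a simplified stand-in for the harmonic-sieve analysis of AN/brainstem responses in the studies:

```python
import numpy as np

fs = 16000
t = np.arange(int(0.2 * fs)) / fs

def complex_tone(f0, n_harm=3):
    """Harmonic complex tone with equal-amplitude harmonics."""
    return sum(np.sin(2 * np.pi * f0 * h * t) for h in range(1, n_harm + 1))

def pitch_salience(x, f0_grid):
    """Peak normalized autocorrelation across candidate-f0 periods."""
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags >= 0
    ac = ac / ac[0]
    lags = np.round(fs / np.asarray(f0_grid)).astype(int)
    return float(ac[lags].max())

grid = np.arange(50.0, 401.0)                              # candidate f0s (Hz)
fifth = complex_tone(220) + complex_tone(220 * 3 / 2)      # perfect 5th (3:2)
tritone = complex_tone(220) + complex_tone(220 * 45 / 32)  # tritone (45:32)
# The fifth's harmonics share a common period (110 Hz), producing a clear
# salience peak; the tritone's do not, mirroring the consonance ordering.
```

Consonant intervals with simple frequency ratios yield strong peaks at a common fundamental period, which is why they sit on peaks of the neural pitch salience curve in the figure.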

 

[Figure: PORcd]

 

 Musical consonance and dissonance are segregated topographically in superior temporal gyrus. (top) Average dipole locations across listeners for the 13 chromatic intervals. Consonant intervals (blue) tend to evoke activity clustered toward the anterolateral portion of Heschl’s gyrus whereas dissonant intervals (red) cluster posteromedially.  (bottom) Source waveforms extracted from left and right hemisphere dipoles. Pooling intervals within classes (inset bar plots), source strength for consonant intervals (solid lines) is stronger than dissonant intervals (dotted lines) in the right but not left hemisphere (i.e., RH: consonance > dissonance; LH: consonance = dissonance). Pooled across listeners, the degree of perceptual consonance across intervals is predicted by right but not left hemisphere source activity.

 

Selected publications:

Bidelman, G. M., & Grall, J. (in press). Functional organization for musical consonance and tonal pitch hierarchy in human auditory cortex. NeuroImage.
 
Bidelman, G. M. (2013). The role of the auditory brainstem in processing musically-relevant pitch. Frontiers in Psychology, 4(264), 1-13.
 
Bidelman, G. M., & Heinz, M. G. (2011). Auditory-nerve responses predict pitch attributes related to musical consonance-dissonance for normal and impaired hearing. Journal of the Acoustical Society of America, 130(3), 1488-1502.
 
Bidelman, G. M., & Krishnan, A. (2009). Neural correlates of consonance, dissonance, and the hierarchy of musical pitch in the human brainstem. Journal of Neuroscience, 29(42), 13165-13171.
 

 

 

Copyright 2014 University of Memphis | Last Updated: 7/10/14