Christopher Plack - Academia.edu
Papers by Christopher Plack
Journal of the Association for Research in Otolaryngology, 2013
The neural mechanisms of pitch coding have been debated for more than a century. The two main mechanisms are coding based on the profiles of neural firing rates across auditory nerve fibers with different characteristic frequencies (place-rate coding), and coding based on the phase-locked temporal pattern of neural firing (temporal coding). Phase locking precision can be partly assessed by recording the frequency-following response (FFR), a scalp-recorded electrophysiological response that reflects synchronous activity in subcortical neurons. Although features of the FFR have been widely used as indices of pitch coding acuity, only a handful of studies have directly investigated the relation between the FFR and behavioral pitch judgments. Furthermore, the contribution of degraded neural synchrony (as indexed by the FFR) to the pitch perception impairments of older listeners and those with hearing loss is not well known. Here, the relation between the FFR and pure-tone frequency discrimination ...
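As an illustration of how phase-locking strength at the stimulus frequency is often quantified from the FFR, here is a minimal Python sketch assuming a pre-averaged response and a spectral signal-to-noise metric; the paper's own analysis pipeline is not specified in this abstract, so names and parameters are illustrative.

```python
import numpy as np

def ffr_snr_at_f0(avg_response, fs, f0, noise_halfwidth_hz=20.0):
    """Estimate FFR strength at the stimulus F0 as a spectral SNR (dB).

    avg_response : 1-D array, response averaged across epochs
    fs           : sampling rate (Hz)
    f0           : stimulus fundamental frequency (Hz)
    """
    n = len(avg_response)
    spec = np.abs(np.fft.rfft(avg_response * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    f0_bin = np.argmin(np.abs(freqs - f0))
    signal = spec[f0_bin]

    # Noise floor: mean magnitude in flanking bins, excluding the F0 bin itself.
    in_band = np.abs(freqs - f0) <= noise_halfwidth_hz
    in_band[f0_bin] = False
    noise = spec[in_band].mean()

    return 20.0 * np.log10(signal / noise)
```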
Hearing Research, 2020
Age-related cochlear synaptopathy (CS) has been shown to occur in rodents with minimal noise exposure, and has been hypothesized to play a crucial role in age-related hearing declines in humans. Because CS affects mainly low-spontaneous-rate auditory nerve fibers, differential electrophysiological measures, such as the ratio of the amplitude of wave I of the auditory brainstem response (ABR) at high to low click levels (WI_H/WI_L) and the difference between frequency-following response (FFR) levels to shallow and deep amplitude-modulated tones (FFR_S - FFR_D), have been proposed as CS markers. However, age-related audiometric threshold shifts, particularly prominent at high frequencies, may confound the interpretation of these measures in cross-sectional studies of age-related CS. To address this issue, we measured WI_H/WI_L and FFR_S - FFR_D using high-pass masking (HP) noise to eliminate the contribution of high-frequency cochlear regions to the responses in a cross-sectional sample of 102 subjects (34 young, 34 middle-aged, 34 elderly). WI_H/WI_L in the presence of the HP noise did not decrease as a function of age. However, in the absence of HP noise, WI_H/WI_L showed credible age-related decreases even after partialing out the effects of audiometric threshold shifts. No credible age-related decreases of FFR_S - FFR_D were found. Overall, the results do not provide evidence of age-related CS in the low-frequency region where the responses were restricted by the HP noise, but are consistent with the presence of age-related CS in higher frequency regions.
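The two differential markers themselves are simple to compute once the underlying measurements are available. Below is a hedged sketch with hypothetical values; the variable names and units are illustrative and are not taken from the paper.

```python
def cs_markers(wave1_high_uv, wave1_low_uv, ffr_shallow_db, ffr_deep_db):
    """Differential measures of the kind described in the abstract.

    wave1_high_uv / wave1_low_uv : ABR wave I amplitudes (uV) at the high
                                   and low click levels
    ffr_shallow_db / ffr_deep_db : FFR levels (dB) for shallow- and
                                   deep-modulated tones
    """
    wi_ratio = wave1_high_uv / wave1_low_uv   # WI_H / WI_L
    ffr_diff = ffr_shallow_db - ffr_deep_db   # FFR_S - FFR_D
    return wi_ratio, ffr_diff

# Hypothetical values, for illustration only.
print(cs_markers(0.45, 0.30, -12.0, -6.0))    # approx (1.5, -6.0)
```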
Hearing Research, 2021
Age-related cochlear synaptopathy (CS) has been shown to occur in rodents with minimal noise exposure, and has been hypothesized to play a crucial role in age-related hearing declines in humans. It is not known to what extent age-related CS occurs in humans, and how it affects the coding of supra-threshold sounds and speech in noise. Because in rodents CS affects mainly low- and medium-spontaneous-rate (L/M-SR) auditory nerve fibers with rate-level functions covering medium to high levels, it should lead to greater deficits in the processing of sounds at high than at low stimulus levels. In this cross-sectional study the performance of 102 listeners across the age range (34 young, 34 middle-aged, 34 older) was assessed in a set of psychophysical temporal processing and speech reception in noise tests at both low and high stimulus levels. Mixed-effects multiple regression models were used to estimate the effects of age while partialing out effects of audiometric thresholds, lifetime noise exposure, cognitive abilities (assessed with additional tests), and musical experience. Age was independently associated with performance deficits on several tests. However, for only one out of 13 tests were age effects credibly larger at the high compared to the low stimulus level. Overall, these results do not provide much evidence that age-related CS, to the extent to which it may occur in humans according to the rodent model of greater L/M-SR synaptic loss, has substantial effects on psychophysical measures of auditory temporal processing or on speech reception in noise.
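A rough sketch of the kind of model structure described (a random intercept per listener, an age-by-level interaction as the CS-sensitive term, and covariates partialed out), using statsmodels on synthetic data. The actual model specification, covariate set, and estimation approach in the paper may differ; this only illustrates the general regression logic.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj = 60
subj = np.repeat(np.arange(n_subj), 2)          # each listener tested at 2 levels
level = np.tile(["low", "high"], n_subj)
age = np.repeat(rng.uniform(20, 80, n_subj), 2)
pta = np.repeat(rng.normal(10, 5, n_subj), 2)   # audiometric threshold covariate
noise = np.repeat(rng.normal(0, 1, n_subj), 2)  # lifetime noise exposure (z-score)
score = 0.02 * age + 0.1 * pta + rng.normal(0, 1, 2 * n_subj)  # synthetic outcome

df = pd.DataFrame(dict(subject=subj, level=level, age=age, pta=pta,
                       noise=noise, score=score))

# The age x level interaction asks whether the age effect is larger at the high
# level, while audiometric threshold and noise exposure are partialed out.
model = smf.mixedlm("score ~ age * level + pta + noise", df, groups=df["subject"])
print(model.fit().summary())
```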
Hearing Research, 2019
The relative importance of neural temporal and place coding in auditory perception is still a matter of much debate. The current article is a compilation of viewpoints from leading auditory psychophysicists and physiologists regarding the upper frequency limit for the use of neural phase locking to code temporal fine structure in humans. While phase locking is used for binaural processing up to about 1500 Hz, there is disagreement regarding the use of monaural phase-locking information at higher frequencies. Estimates of the general upper limit proposed by the contributors range from 1500 to 10000 Hz. The arguments depend on whether or not phase locking is needed to explain psychophysical discrimination performance at frequencies above 1500 Hz, and whether or not the phase-locked neural representation is sufficiently robust at these frequencies to provide usable information. The contributors suggest key experiments that may help to resolve this issue, and experimental findings that may cause them to change their minds. This issue is of crucial importance to our understanding of the neural basis of auditory perception in general, and of pitch perception in particular. Keywords: phase locking; temporal fine structure; temporal coding; place coding; pitch. Highlights: Phase locking is used in binaural processing for frequencies up to ~1500 Hz. Estimates of the general upper limit (including monaural processing) vary from 1500 to 10000 Hz. Direct recordings from the human auditory nerve would determine the peripheral limitation. Understanding of the central processing of temporal and place cues is needed to establish an upper limit.
The Journal of the Acoustical Society of America, 2016
Objectives: Diabetes mellitus (DM) is associated with a variety of sensory complications. Very little attention has been given to auditory neuropathic complications in DM. The aim of this study was to determine whether type 1 DM (T1DM) affects neural coding of the rapid temporal fluctuations of sounds, and how any deficits may impact on behavioral performance. Design: Participants were 30 young normal-hearing T1DM patients, and 30 age-, sex-, and audiogram-matched healthy controls. Measurements included: electrophysiological measures of auditory nerve and brainstem function using the click-evoked auditory brainstem response (ABR), and of brainstem neural temporal coding using the sustained frequency-following response (FFR); behavioral tests of temporal coding (interaural phase difference, IPD, discrimination and the frequency difference limen, FDL); tests of speech perception in noise; and self-report measures of auditory disability using the Speech, Spatial and Qualities of Hearing scale (SSQ). Results: There were no significant differences between T1DM patients and controls in the ABR. However, the T1DM group showed significantly reduced FFRs to both temporal envelope and temporal fine structure. The T1DM group also showed significantly higher IPD and FDL thresholds, worse speech-in-noise performance, and lower overall SSQ scores than the control group. Conclusions: These findings suggest that T1DM is associated with degraded neural temporal coding in the brainstem in the absence of an elevation in audiometric threshold, and that the FFR may provide an early indicator of neural damage in T1DM, before any abnormalities can be identified using standard clinical tests. However, the relation between the neural deficits and the behavioral deficits is uncertain.
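Frequency difference limens such as the FDL mentioned here are typically estimated with an adaptive transformed up-down procedure. The following is a generic 2-down/1-up sketch with a toy simulated listener; it is not the specific tracking rule used in this study, which the abstract does not state.

```python
import numpy as np

def two_down_one_up(simulate_trial, start_delta, step_factor=1.25, n_reversals=12):
    """Generic 2-down/1-up adaptive track converging on ~70.7 % correct,
    e.g. for estimating a frequency difference limen (FDL)."""
    delta, correct_in_row, direction = start_delta, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if simulate_trial(delta):
            correct_in_row += 1
            if correct_in_row == 2:
                correct_in_row = 0
                if direction == +1:
                    reversals.append(delta)   # turn-around from 'up' to 'down'
                direction = -1
                delta /= step_factor
        else:
            correct_in_row = 0
            if direction == -1:
                reversals.append(delta)       # turn-around from 'down' to 'up'
            direction = +1
            delta *= step_factor
    return float(np.mean(reversals[-8:]))     # threshold: mean of last reversals

# Toy listener whose probability of a correct response grows with delta (Hz).
rng = np.random.default_rng(1)
listener = lambda d: rng.random() < 1 - 0.5 * np.exp(-d / 2.0)
print(two_down_one_up(listener, start_delta=20.0))
```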
The Journal of the Acoustical Society of America, 2017
Older hearing-impaired listeners generally benefit little from lateral separation of multiple talkers when listening to one of them. This study aimed to determine how spatial release from masking (SRM) in such listeners is affected when the interaural time differences (ITDs) in the temporal fine structure (TFS) are manipulated by tone-vocoding (TVC) at the ears by a master hearing aid system. Word recall was compared, with and without TVC, when target and masker sentences from a closed set were played simultaneously from the front loudspeaker (co-located) and when the maskers were played 45° to the left and right of the listener (separated). For 20 hearing-impaired listeners aged 64 to 86, SRM was 3.7 dB smaller with TVC than without TVC. This difference in SRM correlated with mean audiometric thresholds below 1.5 kHz, even when monaural TFS sensitivity (discrimination of frequency shifts in identically filtered complexes) was partialed out, suggesting that low-frequency audiometric ...
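For readers unfamiliar with tone vocoding, the sketch below shows the general idea of replacing the temporal fine structure in each analysis band with a tone carrier while preserving the channel envelope. It is a simplified, hypothetical implementation, not the master hearing aid processing used in the study; channel count, filter design, and band edges are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def tone_vocode(x, fs, band_edges_hz):
    """Very simplified tone vocoder: within each analysis band, keep the
    temporal envelope but replace the fine structure with a pure tone at
    the band centre frequency (discarding the original TFS/ITD cues)."""
    out = np.zeros_like(x, dtype=float)
    t = np.arange(len(x)) / fs
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))          # channel envelope
        fc = np.sqrt(lo * hi)                # geometric centre frequency
        out += env * np.sin(2 * np.pi * fc * t)
    return out

# Example: 8 roughly log-spaced channels between 100 Hz and 7 kHz.
fs = 16000
x = np.random.randn(fs)                      # stand-in for a speech signal
edges = np.geomspace(100, 7000, 9)
y = tone_vocode(x, fs, edges)
```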
Hearing research, Jan 2, 2016
Noise-induced cochlear synaptopathy has been demonstrated in numerous rodent studies. In these animal models, the disorder is characterized by a reduction in amplitude of wave I of the auditory brainstem response (ABR) to high-level stimuli, whereas the response at threshold is unaffected. The aim of the present study was to determine if this disorder is prevalent in young adult humans with normal audiometric hearing. One hundred and twenty-six participants (75 females) aged 18-36 were tested. Participants had a wide range of lifetime noise exposures as estimated by a structured interview. Audiometric thresholds did not differ across noise exposures up to 8 kHz, although 16-kHz audiometric thresholds were elevated with increasing noise exposure for females but not for males. ABRs were measured in response to high-pass (1.5 kHz) filtered clicks of 80 and 100 dB peSPL. Frequency-following responses (FFRs) were measured to 80 dB SPL pure tones from 240 to 285 Hz, and to 80 dB SPL 4 kHz ...
Trends in Hearing, 2016
Cochlear synaptopathy (or hidden hearing loss), due to noise exposure or aging, has been demonstrated in animal models using histological techniques. However, diagnosis of the condition in individual humans is problematic because of (a) test reliability and (b) lack of a gold standard validation measure. Wave I of the transient-evoked auditory brainstem response is a noninvasive electrophysiological measure of auditory nerve function and has been validated in the animal models. However, in humans, Wave I amplitude shows high variability both between and within individuals. The frequency-following response, a sustained evoked potential reflecting synchronous neural activity in the rostral brainstem, is potentially more robust than auditory brainstem response Wave I. However, the frequency-following response is a measure of central activity and may be dependent on individual differences in central processing. Psychophysical measures are also affected by intersubject variability in central ...
Journal of the Association for Research in Otolaryngology, 2015
The frequency-following response (FFR) is a scalp-recorded measure of phase-locked brainstem activity to stimulus-related periodicities. Three experiments investigated the specificity of the FFR for carrier and modulation frequency using adaptation. FFR waveforms evoked by alternating-polarity stimuli were averaged for each polarity and added, to enhance envelope information, or subtracted, to enhance temporal fine structure information. The first experiment investigated peristimulus adaptation of the FFR for pure and complex tones as a function of stimulus frequency and fundamental frequency (F0). It showed more adaptation of the FFR in response to sounds with higher frequencies or F0s than to sounds with lower frequencies or F0s. The second experiment investigated tuning to modulation rate in the FFR. The FFR to a complex tone with a modulation rate of 213 Hz was not reduced more by an adaptor that had the same modulation rate than by an adaptor with a different modulation rate (90 or 504 Hz), thus providing no evidence that the FFR originates mainly from neurons that respond selectively to the modulation rate of the stimulus. The third experiment investigated tuning to audio frequency in the FFR using pure tones. An adaptor that had the same frequency as the target (213 or 504 Hz) did not generally reduce the FFR to the target more than an adaptor that differed in frequency (by 1.24 octaves). Thus, there was no evidence that the FFR originated mainly from neurons tuned to the frequency of the target. Instead, the results are consistent with the suggestion that the FFR for low-frequency pure tones at medium to high levels mainly originates from neurons tuned to higher frequencies. Implications for the use and interpretation of the FFR are discussed.
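The polarity addition/subtraction step is straightforward to express in code. A minimal sketch follows; the array shapes and names are assumptions for illustration rather than the study's recording pipeline.

```python
import numpy as np

def envelope_and_tfs_ffr(epochs_pos, epochs_neg):
    """Combine FFR epochs recorded to opposite stimulus polarities.

    epochs_pos, epochs_neg : arrays of shape (n_epochs, n_samples)

    Adding the two polarity averages cancels components that invert with
    the stimulus and emphasises the envelope-following response; subtracting
    them does the reverse and emphasises fine-structure coding.
    """
    avg_pos = epochs_pos.mean(axis=0)
    avg_neg = epochs_neg.mean(axis=0)
    env_ffr = (avg_pos + avg_neg) / 2.0
    tfs_ffr = (avg_pos - avg_neg) / 2.0
    return env_ffr, tfs_ffr
```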
The Journal of neuroscience : the official journal of the Society for Neuroscience, Jan 4, 2015
When two musical notes with simple frequency ratios are played simultaneously, the resulting musical chord is pleasing and evokes a sense of resolution or "consonance". Complex frequency ratios, on the other hand, evoke feelings of tension or "dissonance". Consonance and dissonance form the basis of harmony, a central component of Western music. In earlier work, we provided evidence that consonance perception is based on neural temporal coding in the brainstem (Bones et al., 2014). Here, we show that for listeners with clinically normal hearing, aging is associated with a decline in both the perceptual distinction and the distinctiveness of the neural representations of different categories of two-note chords. Compared with younger listeners, older listeners rated consonant chords as less pleasant and dissonant chords as more pleasant. Older listeners also had less distinct neural representations of consonant and dissonant chords as measured using a Neural Consonance ...
Hearing Research, 2015
When two notes are played simultaneously they form a musical dyad. The sensation of pleasantness, or "consonance", of a dyad is likely driven by the harmonic relation of the frequency components of the combined spectrum of the two notes. Previous work has demonstrated a relation between individual preference for consonant over dissonant dyads and the strength of neural temporal coding of the harmonicity of consonant relative to dissonant dyads, as measured using the electrophysiological "frequency-following response" (FFR). However, this work also demonstrated that both these variables correlate strongly with musical experience. The current study was designed to determine whether the relation between consonance preference and neural temporal coding is maintained when controlling for musical experience. The results demonstrate that the strength of neural coding of harmonicity is predictive of individual preference for consonance even for non-musicians. An additional purpose of the current study was to assess the cochlear generation site of the FFR to low-frequency dyads. By comparing the reduction in FFR strength when high-pass masking noise was added to the output of a model of the auditory periphery, the results provide evidence that the FFR to low-frequency dyads arises in part from basal cochlear generators.
PLoS ONE, 2013
Many natural sounds fluctuate over time. The detectability of sounds in a sequence can be reduced by prior stimulation in a process known as forward masking. Forward masking is thought to reflect neural adaptation or neural persistence in the auditory nervous system, but it has been unclear where in the auditory pathway this processing occurs. To address this issue, the present study used a "Huggins pitch" stimulus, the perceptual effects of which depend on central auditory processing. Huggins pitch is an illusory tonal sensation produced when the same noise is presented to the two ears except for a narrow frequency band that is different (decorrelated) between the ears. The pitch sensation depends on the combination of the inputs to the two ears, a process that first occurs at the level of the superior olivary complex in the brainstem. Here it is shown that a Huggins pitch stimulus produces more forward masking in the frequency region of the decorrelation than a noise stimulus identical to the Huggins-pitch stimulus except with perfect correlation between the ears. This control stimulus has a peripheral neural representation that is identical to that of the Huggins-pitch stimulus. The results show that processing in, or central to, the superior olivary complex can contribute to forward masking in human listeners.
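A Huggins pitch stimulus of the kind described can be approximated by decorrelating a narrow band of an otherwise identical noise in one ear. A minimal sketch follows, with illustrative parameters that are not necessarily those used in the study.

```python
import numpy as np

def huggins_pitch_noise(fs, dur_s, f_center, bw_hz, rng=None):
    """Diotic noise with a narrow band decorrelated between the ears,
    which tends to evoke a faint pitch near f_center (Huggins pitch)."""
    rng = np.random.default_rng() if rng is None else rng
    n = int(fs * dur_s)
    left = rng.standard_normal(n)

    # Right ear: identical noise except within the narrow band, where the
    # spectral components are replaced by independent (decorrelated) ones.
    spec = np.fft.rfft(left)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    band = (freqs > f_center - bw_hz / 2) & (freqs < f_center + bw_hz / 2)
    indep = np.fft.rfft(rng.standard_normal(n))
    spec_right = spec.copy()
    spec_right[band] = indep[band]
    right = np.fft.irfft(spec_right, n)
    return np.stack([left, right], axis=1)    # (n_samples, 2) stereo signal

stim = huggins_pitch_noise(fs=44100, dur_s=1.0, f_center=600.0, bw_hz=60.0)
```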
NeuroReport, 2007
Neuropsychologia, 2014
When musical notes are combined to make a chord, the closeness of fit of the combined spectrum to a single harmonic series (the 'harmonicity' of the chord) predicts the perceived consonance (how pleasant and stable the chord sounds; McDermott, Lehr, & Oxenham, 2010). The distinction between consonance and dissonance is central to Western musical form. Harmonicity is represented in the temporal firing patterns of populations of brainstem neurons. The current study investigates the role of brainstem temporal coding of harmonicity in the perception of consonance. Individual preference for consonant over dissonant chords was measured using a rating scale for pairs of simultaneous notes. In order to investigate the effects of cochlear interactions, notes were presented in two ways: both notes to both ears or each note to a different ear. The electrophysiological frequency-following response (FFR), reflecting sustained neural activity in the brainstem synchronised to the stimulus, was also measured. When both notes were presented to both ears the perceptual distinction between consonant and dissonant chords was stronger than when the notes were presented to different ears. In the condition in which both notes were presented to both ears, additional low-frequency components, corresponding to difference tones resulting from nonlinear cochlear processing, were observable in the FFR, effectively enhancing the neural harmonicity of consonant chords but not dissonant chords. Suppressing the cochlear envelope component of the FFR also suppressed the additional frequency components. This suggests that, in the case of consonant chords, difference tones generated by interactions between notes in the cochlea enhance the perception of consonance. Furthermore, individuals with a greater distinction between consonant and dissonant chords in the FFR to individual harmonics had a stronger preference for consonant over dissonant chords. Overall, the results provide compelling evidence for the role of neural temporal coding in the perception of consonance, and suggest that the representation of harmonicity in phase-locked neural firing drives the perception of consonance.
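The difference-tone argument can be illustrated with a small numerical check: for a consonant interval such as a perfect fifth, pairwise difference tones between the harmonics of the two notes all fall on a common harmonic series, whereas for a dissonant interval such as the tritone they do not. This is a simplified illustration of the harmonicity idea, not the paper's analysis; the note frequency and number of harmonics are arbitrary.

```python
import numpy as np

def difference_tones(f_low, ratio, n_harmonics=4):
    """Pairwise difference tones between the harmonics of a two-note dyad."""
    a = f_low * np.arange(1, n_harmonics + 1)
    b = f_low * ratio * np.arange(1, n_harmonics + 1)
    return sorted({round(abs(x - y), 1) for x in a for y in b if abs(x - y) > 0})

print(difference_tones(220.0, 3 / 2))       # perfect fifth: all multiples of 110 Hz
print(difference_tones(220.0, 2 ** 0.5))    # tritone: no common fundamental
```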
NeuroImage, 2010
Neuroimaging studies of pitch coding seek to identify pitch-related responses separate from responses to other properties of the stimulus, such as its energy onset, and other general aspects of the listening context. The current study reports the first attempt to evaluate these modulatory influences using functional magnetic resonance imaging (fMRI) measures of cortical pitch representations. Stimulus context was manipulated using a 'classical stimulation paradigm' (whereby successive pitch stimuli were separated by gaps of silence) and a 'continuous stimulation paradigm' (whereby successive pitch stimuli were interspersed with noise to maintain a stable envelope). Pitch responses were measured for two types of pitch-evoking stimuli: a harmonic complex tone and a complex Huggins pitch. Results for a group of 15 normally hearing listeners revealed that context effects were mostly observed in primary auditory regions, while the most significant pitch responses were localized to posterior nonprimary auditory cortex, specifically planum temporale. Sensitivity to pitch was greater for the continuous stimulation conditions, perhaps because they better controlled for concurrent responses to the noise energy onset and reduced the potential problem of a nonlinear fMRI response becoming saturated. These results provide support for hierarchical processing within human auditory cortex, with some parts of primary auditory cortex engaged by general auditory energy, some parts of planum temporale specifically responsible for representing pitch information, and adjacent regions responsible for complex higher-level auditory processing such as representing pitch information as a function of listening context.
The Journal of the Acoustical Society of America, 2010
Brief complex tone bursts with fundamental frequencies (F0s) of 100, 125, 166.7, and 250 Hz were bandpass filtered between the 22nd and 30th harmonics, to produce waveforms with five regularly occurring envelope peaks ("pitch pulses") that evoked pitches associated with their repetition period. Two such tone bursts were presented sequentially and separated by a silent interval of two periods (2/F0). When the relative phases of the two bursts were varied, such that the interpulse interval (IPI) between the last pulse of the first burst and the first pulse of the second burst was varied, the pitch of the whole sequence was little affected. This is consistent with previous results suggesting that the pitch integration window may be "reset" by a discontinuity. However, when the interval between the two bursts was filled with a noise with the same spectral envelope as the complex, variations in IPI had substantial effects on the pitch of the sequence. It is suggested that the presence of the noise causes the two tone bursts to appear continuous, hence resetting does not occur, and the pitch mechanism is sensitive to the phase discontinuity across the silent interval.
The Journal of the Acoustical Society of America, 2009
Fundamental frequency (F0) discrimination between two sequentially presented complex (target) tones can be impaired in the presence of an additional complex tone (the interferer), even when the interferer is filtered into a remote spectral region [Gockel, H., et al. (2004). J. Acoust. Soc. Am. 116, 1092-1104]. This "pitch discrimination interference" (PDI) is greatest when the interferer and target have similar F0s. The present study measured PDI using monaural or diotic complex-tone interferers and "Huggins pitch" or diotic complex-tone targets. The first experiment showed that listeners hear a "complex Huggins pitch" (CHP), approximately corresponding to F0, when multiple phase transitions at harmonics of (but not at) F0 are present. The accuracy of pitch matches to the CHP was similar to that for an equally loud diotic tone complex presented in noise. The second experiment showed that PDI can occur when the target is a CHP while the interferer is a diotic or monaural complex tone. In a third experiment, similar amounts of PDI were observed for CHP targets and for loudness-matched diotic complex-tone targets. Thus, a conventional complex tone and a CHP appear to be processed in common at the stage where PDI occurs.
The Journal of the Acoustical Society of America, 2009
Pitch discrimination interference (PDI) is an impairment in fundamental frequency (F0) discrimination between two sequentially presented complex (target) tones produced by another complex tone (the interferer) that is filtered into a remote spectral frequency region. Micheyl and Oxenham (2007) reported a modest PDI for target tones and interferers both containing resolved harmonics when the F0 difference between the two target tones (∆F0) was small. When the interferer was in a lower spectral region than the target, a much larger PDI was observed when ∆F0 was large (14%-20%), and, under these conditions, performance in the presence of an interferer was worse than at smaller ∆F0s. The present study replicated the occurrence of PDI for complex tones containing resolved harmonics for small ∆F0s. In contrast to Micheyl and Oxenham's findings, performance in the presence of an interferer always increased monotonically with increasing ∆F0. However, when the interferer was in a lower spectral region than the target (and not vice versa), some subjects needed verbal instructions or modified stimuli to choose the correct cue, indicating an asymmetry across conditions in how spontaneously obvious the correct listening cue was.
The Journal of the Acoustical Society of America, 1998
The purpose of this study is to clarify the role of suppression in the growth of masking when a signal is well above the masker in frequency (upward spread of masking). Classical psychophysical models assume that masking is primarily due to the spread of masker excitation, and that the nonlinear upward spread of masking reflects a differential growth in excitation between the masker and the signal at the signal frequency. In contrast, recent physiological studies have indicated that upward spread of masking in the auditory nerve is due to the increasing effect of suppression with increasing masker level. This study compares thresholds for signals between 2.4 and 5.6 kHz in simultaneous and nonsimultaneous masking, for conditions in which the masker is either at or well below the signal frequency. Maximum differences between simultaneous and nonsimultaneous masking were small (<6 dB) for the on-frequency conditions but larger for the off-frequency conditions (15-32 dB). The results suggest that suppression plays a major role in determining thresholds at high masker levels, when the masker is well below the signal in frequency. This is consistent with the conclusions of physiological studies. However, for signal levels higher than about 40 dB SPL, the growth of masking for signals above the masker frequency is nonlinear even in the nonsimultaneous-masking conditions, where suppression is not expected. This is consistent with an explanation based on the compressive response of the basilar membrane, and confirms that suppression is not necessary for nonlinear upward spread of masking.
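The compression-based account of nonlinear upward spread of masking can be sketched with a toy excitation-matching calculation: if the on-frequency signal is compressed (exponent c of roughly 0.2) while the low-frequency masker grows approximately linearly at the signal place, then the signal threshold grows at about 1/c dB per dB of masker level. The exponent and offset below are illustrative assumptions, not values from the paper.

```python
def signal_threshold_db(masker_db, c=0.2, offset_db=-40.0):
    """Toy excitation-matching model: threshold is where the compressed
    signal response equals the (roughly linear) masker response at the
    signal place, giving a growth-of-masking slope of about 1/c dB/dB."""
    return (masker_db + offset_db) / c

for m in (60, 70, 80):
    print(m, signal_threshold_db(m))   # each 10 dB of masker adds ~50 dB to threshold
```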
The Journal of the Acoustical Society of America, 2000
The experiment compared the pitches of complex tones consisting of unresolved harmonics. The fundamental frequency (F0) of the tones was 250 Hz and the harmonics were bandpass filtered between 5500 and 7500 Hz. Two 20-ms complex-tone bursts were presented, separated by a brief gap. The gap was an integer number of periods of the waveform: 0, 4, or 8 ms. The envelope phase of the second tone burst was shifted, such that the interpulse interval (IPI) across the gap was reduced or increased by 0.25 or 0.75 periods (1 or 3 ms). A "no shift" control was also included, where the IPI was held at an integer number of periods. Pitch matches were obtained by varying the F0 of a comparison tone with the same temporal parameters as the standard but without the shift. Relative to the no-shift control, the variations in IPI produced substantial pitch shifts when there was no gap between the bursts, but little effect was seen for gaps of 4 or 8 ms. However, for some conditions with the same IPI in the shifted interval, an increase in the IPI of the comparison interval from 4 to 8 ms (gap increased from 0 to 4 ms) changed the pitch match. The presence of a pitch shift suggests that the pitch mechanism is integrating information across the two tone bursts. It is argued that the results are consistent with a pitch mechanism employing a long integration time for continuous stimuli that is reset in response to temporal discontinuities. For a 250-Hz F0, an 8-ms IPI may be sufficient for resetting. Pitch models based on a spectral analysis of the simulated neural spike train, on an autocorrelation of the spike train, and on the mean rate of pitch pulses all failed to account for the observed pitch matches.
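A minimal sketch of how such stimuli can be generated, assuming cosine-phase harmonics restricted to the 5500-7500 Hz band and omitting the onset/offset ramps and filtering details of the actual experiment.

```python
import numpy as np

def tone_burst(f0, fs, dur_s, lo_hz, hi_hz, phase_cycles=0.0):
    """Complex tone containing only the harmonics between lo_hz and hi_hz
    (unresolved for f0 = 250 Hz), with an overall phase offset expressed
    in periods of f0 (used to shift the envelope of the second burst)."""
    t = np.arange(int(dur_s * fs)) / fs
    harmonics = [h for h in range(1, int(hi_hz / f0) + 1) if lo_hz <= h * f0 <= hi_hz]
    x = sum(np.cos(2 * np.pi * h * f0 * (t + phase_cycles / f0)) for h in harmonics)
    return x / len(harmonics)

fs, f0 = 48000, 250.0
burst1 = tone_burst(f0, fs, 0.020, 5500, 7500)
burst2 = tone_burst(f0, fs, 0.020, 5500, 7500, phase_cycles=0.25)  # IPI shifted by 1 ms
gap = np.zeros(int(1 / f0 * fs))              # silent gap of one period (4 ms)
stimulus = np.concatenate([burst1, gap, burst2])
```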