Auditory Perception Research Papers - Academia.edu
2025, Aportes de la Lingüística Cognitiva al estudio y la enseñanza de la lengua y la literatura: XI Simposio de la Asociación Argentina de Lingüística Cognitiva
This study investigated the effect of listening mode on the perception of two English nuclear pitch accents by speakers of River Plate Spanish. The aim was to determine whether the English (L2) nuclear pitch accent H*L-H%, a form that does not exist in Spanish (L1), presented a greater degree of difficulty than the accent H*L-L%, a form that exists in both languages, and whether identification of the two accents varied depending on the mode of access: psycho-acoustic or pragmatic.
Thirty-two participants listened to a list of sentences carrying both nuclear pitch accents under two listening modes: one oriented toward pragmatic meaning (statement or question) and the other toward psycho-acoustic categorization (falling or falling-rising). The results suggest that the nuclear accent absent from the native language, H*L-H%, was harder to process than H*L-L% under both modes. The task oriented toward pragmatic meaning promoted more successful interpretation of H*L-L% as a statement, but hindered interpretation of H*L-H% as a question. A focus on form, promoted by the psycho-acoustic mode, facilitated correct categorization of H*L-H% as falling-rising. It was concluded that the new nuclear pitch accent becomes accessible to a greater or lesser degree depending on the listening mode employed: the task oriented toward pragmatic meaning promotes more accurate interpretation of the L2 accent that resembles one existing in the L1, whereas attention to form facilitates categorization of a new L2 form.
2025, Brain Research
It is a well-known fact that attention is crucial for driving a car. This study aimed to assess the impact of attentional workload modulation on cerebral activity during a simulated driving task using magnetoencephalography (MEG). A car simulator equipped with a steering wheel, turn indicators, an accelerator and a brake pedal was specifically designed to be used with MEG. Attentional demand was modulated using a radio broadcast. During half of the driving scenarios, subjects could ignore the broadcast (simple task, ST); during the other half, they had to listen to it actively in order to answer three questions (dual task, DT). Evoked magnetic responses were computed in both conditions separately for two visual stimuli of interest: traffic lights (changing from green to amber) and direction signs (arrows to the right or to the left) shown on boards. The cortical sources of these activities were estimated using a minimum-norm current estimates modeling technique. Results show the activation of a large distributed network that was similar in ST and DT and similar for the traffic lights and the direction signs. This network mainly involves sensory visual areas, as well as parietal and frontal regions known to play a role in selective attention, and motor areas. The increase in attentional demand affects the neuronal processing of visual information relevant for driving as early as the perceptual stage. By demonstrating the feasibility of recording MEG activity during an interactive simulated driving task, this study opens new possibilities for investigating issues regarding drivers' activity.
2025, The International Tinnitus Journal
Introduction: Tinnitus is considered one of the major symptoms associated with many pathologies, and it is also present in individuals with normal hearing. Given its varied causes and mechanisms of generation, and the increasing number of individuals complaining of tinnitus, rehabilitation becomes crucial for audiologists. Objective: The study was undertaken to assess the efficacy of Residual Inhibition Therapy as a treatment for unilateral tinnitus in individuals with normal hearing by comparing contra-lateral acoustic reflexes and Tinnitus Handicap Inventory scores before and after therapy. Materials and Methods: Ten subjects between 20 and 45 years of age were included in the study. Tinnitus pitch and intensity were matched and Residual Inhibition Therapy was provided. Pre-therapy contra-lateral acoustic reflexes and Tinnitus Handicap Inventory scores were compared with the post-therapy scores. Results and Conclusion: Statistical analysis revealed no significant difference; however, elevated contra-lateral acoustic reflexes were seen after Residual Inhibition Therapy. Although the small sample size makes it hard to draw conclusions about the effectiveness of Residual Inhibition Therapy as a tinnitus treatment, the elevated post-therapy contra-lateral acoustic reflexes pave the way for further studies.
2025, Indian Journal of Otology
2025
Emotional dysregulation is often attributed to external stressors, biological predispositions, or clinical conditions, yet an overlooked contributor may lie in the everyday auditory environments individuals create for themselves. This paper introduces the concept of Affective Soundscaping, a theoretical model describing how self-selected auditory input—such as music, talk media, television, and podcasts—shapes emotional baseline, physiological arousal, and behavioral expression. These effects are especially pronounced in domains where auditory choice is habitual, such as in the home, during commutes, and within workspace routines. Drawing from psychoacoustics, affective neuroscience, and self-regulation theory, the model outlines a five-stage process: auditory input, physiological activation, affective state, behavioral outcome, and feedback loop. Unlike existing research on music therapy or environmental sound exposure, this framework emphasizes the cumulative psychological effects of habitual, volitional sound choices in home, work, and mobile contexts. The model suggests that emotional reactivity, anxiety, or volatility may be less a matter of temperament and more a function of repeated exposure to dysregulating soundscapes. It also highlights how many individuals come to mistake their conditioned emotional state for their inherent personality. By reframing sound selection as a behavioral health variable, this paper encourages greater awareness of sensory self-regulation and positions auditory input as a potential lever for emotional clarity and psychological well-being.
2025, Animal Behaviour
Predictions from two models of partial prey consumption were tested using antlion larvae (third instar Myrmeleon mobilis), a sit-and-wait predator. Griffiths' (1980) 'Digestion Rate Limitation' model correctly predicted decreased handling time and increased ingestion rate with increasing encounter rates. The model incorrectly predicted constant percentage extraction; percentage extraction changed significantly with encounter rate. An optimality model appropriate for ambush predators (Lucas & Grafen, in press) qualitatively matched observations, although antlions always discarded prey somewhat earlier than predicted. Thus neither of the models of partial prey consumption quantitatively fits observations. This reduction in handling time has a minor influence on the rate of energy intake, and therefore may be adaptive if other factors are taken into account. I show that discarding prey early is adaptive if prey that arrive when the predator is empty handed are more easily caught than those that arrive when the predator is eating. Preliminary results support this assumption.
2025, Animal Behaviour
Coevolution between senders and receivers is expected to produce a close match between signal design and sensory biology. We evaluated this hypothesis in songbirds by comparing aspects of acoustic signal space with the frequency range of auditory sensitivity and temporal resolution in tufted titmice, Baeolophus bicolor; house sparrows, Passer domesticus; and white-breasted nuthatches, Sitta carolinensis. Auditory measurements were made electrophysiologically from the scalp using two classes of auditory-evoked potentials: the auditory brain-stem response (ABR) and the envelope-following response (EFR). ABRs to tone-burst stimuli indicated maximum sensitivity from 2.2 to 3.2 kHz in all species, but 12–14 dB greater sensitivity in titmice than in sparrows and nuthatches at 6.4 kHz (the highest frequency tested). Modulation rate transfer functions based on EFRs to amplitude-modulated tones suggested greater temporal resolution in titmice and sparrows than in nuthatches. Conservation of the frequency range of maximum sensitivity across species resulted in a mismatch with the dominant frequency of song in sparrows. The mismatch may reflect auditory constraints coupled with selection for high-frequency song and relaxed selection for a close match between sender and receiver due to small territory size. Consistent with coevolution between senders and receivers, high-frequency sensitivity varied with the maximum frequency of species-specific vocalizations, whereas temporal resolution varied with the maximum rate of envelope periodicity. Enhanced high-frequency sensitivity of the titmouse may reflect a specialization for processing high-frequency communication signals such as alarm calls.
2025, The Language Learning Journal
Recent policy reforms in Scotland mean that all primary teachers are expected to teach a foreign language (FL) to children from age 5, introducing a second language around age 9. This small-scale research study aimed to ascertain 38 primary teachers' perceptions of their confidence to teach a FL to primary learners and what they felt would be helpful in developing their language proficiency and language teaching pedagogy. The teachers, while enthusiastic about the thinking behind the policy, expressed concern about their ability to provide a good model of language to their classes and their own development as learners of a language while simultaneously having to teach it. FL assistants, secondary colleagues and FL development officers were seen as valuable sources of support, but questions were raised about the sustainability of the policy without long-term permanent commitment.
2025, HAL (Le Centre pour la Communication Scientifique Directe)
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
2025, Seminars in hearing
Ira Hirsh was among the first to recognize that the auditory system does not deal with temporal information in a unitary way across the continuum of time intervals involved in speech processing. He identified the short range (extending from 1 to 20 milliseconds) as that of phase perception, the range between 20 and 100 milliseconds as that in which auditory patterns emerge, and the long range from 100 milliseconds and longer as that of separate auditory events. Furthermore, he also was among the first to recognize that auditory time perception heavily depended on spectral context. A study of the perception of sequences representing different temporal orders of three tones, by Hirsh and the author (e.g., Divenyi and Hirsh, 1978) demonstrated the dependence of auditory sequence perception on both time range and spectral context, and provided a bridge between Hirsh's view of auditory time and Bregman's view of stream segregation. A subsequent search by the author for psychophys...
2025, The Journal of the Acoustical Society of America
Trained listeners were asked to discriminate, in a 2AFC paradigm, a nominally zero duration and a longer gap marked by a 10-ms and a 100-ms tone burst. Confirming earlier results (Divenyi and Sachs, Perc. Psychophys. 24, 429–436 [1978]), the average duration of a just discriminable (d′ = 1.0) gap was found to be 0.87 ms when the frequency of both markers was fixed at 1 kHz (the Δf = 0 condition), whereas it increased to 5.1 ms when the frequency of the first marker was 2 kHz and that of the second was 500 Hz (the Δf = 2 octave condition). In subsequent experiments the stimulus in each trial was preceded by three presentations of an “adaptor” sound. The duration of each adaptor was 300 ms and each of them was followed by a 150-ms silent interval. Adaptors were either tone bursts (0.5, 1, or 2 kHz) or bursts of bandpass noise (0.5 to 2 kHz). Within each block of trials only one type of adaptor sound was used. Long on- and offset ramps (50 ms) attempted to minimize the timing cues in t...
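As background to the abstract above, the d′ = 1.0 criterion is the standard signal-detection definition of a just-discriminable difference; in an unbiased 2AFC task, d′ relates to proportion correct by Pc = Φ(d′/√2). A minimal sketch of that conversion (illustrative only, not the authors' analysis code):

```python
from math import sqrt
from statistics import NormalDist

_N = NormalDist()  # standard normal distribution

def dprime_2afc(p_correct: float) -> float:
    """d' for an unbiased 2AFC task: d' = sqrt(2) * z(Pc)."""
    return sqrt(2) * _N.inv_cdf(p_correct)

def pc_from_dprime(d: float) -> float:
    """Expected proportion correct in 2AFC for a given d': Pc = Phi(d / sqrt(2))."""
    return _N.cdf(d / sqrt(2))

# The d' = 1.0 threshold criterion used in the study corresponds to
# about 76% correct responses in a 2AFC task.
print(round(pc_from_dprime(1.0), 3))  # → 0.76
```

So a "just discriminable (d′ = 1.0) gap" is simply the gap duration at which the listener would be right about 76% of the time.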
2025, The Journal of the Acoustical Society of America
A binaural complex consisting of two dichotically presented tones close in frequency will have a pitch different from that of either component presented monaurally. The object of the present experiment was to study the pitch of the binaural complex as a function of both interaural intensity difference and the bandwidth of one of the dichotic components. The dichotic stimulus consisted of a 100-ms burst of narrow-band noise centered at 1700 Hz and of a simultaneous, 100-ms pure tone having its frequency fixed either at 1650 or 1750 Hz. Using a modified Method-of-Adjustments procedure, the listeners matched the pitch of various dichotic tone-noise complexes to that of a binaural tone of variable frequency. Results indicate that the effective bandwidth of the monaural components contributing to the pitch of such complexes corresponds to that of the Critical Band. Relevance of the data to a central pitch processor theory will be discussed. [Work supported by the Veterans Administration.]
2025, The Journal of the Acoustical Society of America
Two tone bursts, one to each ear, were separated by an interval, t1. Observers had to adjust the time interval, t2, between two subsequent tone bursts so as to match that interval. Each 86-dB-SPL burst was 20 msec long and was shaped with a 2.5-msec rise time and a 10-msec fall time. The leading edges of the standard tone burst pair were 40 msec apart. The frequency of the first three bursts, f1, was the same, whereas that of the fourth burst was f1−Δf. The geometric mean of the two frequencies was held constant at 1 kHz. As in the monaural case [Divenyi and Hirsh, J. Acoust. Soc. Amer. 52, 166(A) (1972)], the adjusted duration of t2 becomes 3–8 msec shorter than t1 as Δf increases to the third octave. This discrepancy is all but eliminated when the intensity of the fourth tone burst is lowered by 30–40 dB. [Supported by NINDS Grant No. NS03856.]
2025, The Journal of the Acoustical Society of America
Well-trained subjects identified with remarkable accuracy the temporal order of three contiguous pure tones of different frequencies, presented as a single sequence. For a given set of three frequencies covering a one-octave span, the minimum duration of each component tone necessary for absolute identification of a sequence was 2–7 msec on the average. A total frequency range narrower than 1/3–2/3 octave depressed the identifiability of temporal order, whereas increasing the frequency range beyond this limit had no appreciable effect. A simple harmonic relation between the three components was associated with higher identification performance than was a complex harmonic relation. In general, those temporal orders in which the frequency change was unidirectional were more easily identified than the others. Also, the highest and the lowest of the three tones in final position were recognized more often than any of the sequences and were, most probably, used by the observers as cues for ...
2025, Perception & Psychophysics
Experienced observers were asked to identify, in a four-level 2AFC situation, the longer of two unfilled time intervals, each of which was marked by a pair of 20-msec acoustic pulses. When all the markers were identical, high-level (86-dB SPL) bursts of coherently gated sinusoids or bursts of band-limited Gaussian noise, a change in the spectrum of the markers generally did not affect performance. On the other hand, for 1-kHz tone-burst markers, intensity decreases below 25 dB SL were accompanied by sizable deterioration of the discrimination performance, especially at short (25-msec) base intervals. Similarly large changes in performance were observed also when the two tonal markers of each interval were made very dissimilar from each other, either in frequency (frequency difference larger than 1 octave) or in intensity (level of the first marker at least 45 dB below the level of the second marker). Time-difference thresholds in these two latter cases were found to be nonmonotonically related to the base interval, the minima occurring between 40- and 80-msec onset separations.
2025, Perception & Psychophysics
2025, Perception & Psychophysics
Trained subjects were asked to identify the temporal order of three 20-msec tones (891, 1,000, and 1,118 Hz), which were immediately followed by a fourth tone. It was found that this added tone, irrelevant to the observer's task, decreased the identifiability of the preceding three-tone pattern, as compared with that of the same pattern in isolation. Such a blanking of the memory of the three-tone sequence was most effective when the frequency of the fourth tone was either identical to that of the first pattern tone or when it lay 1/6-1/3 octave above the highest pattern frequency. The blanking effect was strongest when the duration of the fourth tone was equal to that of the pattern components.
2025, Psychonomic Bulletin & Review
Recent research has revealed that the presence of irrelevant visual information during retrieval of long-term memories diminishes recollection of task-relevant visual details. Here, we explored the impact of irrelevant auditory information on remembering task-relevant visual details by probing recall of the same previously viewed images while participants were in complete silence, exposed to white noise, or exposed to ambient sounds recorded at a busy café. The presence of auditory distraction diminished objective recollection of goal-relevant details, relative to the silence and white noise conditions. Critically, a comparison with results from a previous study using visual distractors showed equivalent effects for auditory and visual distraction. These findings suggest that disruption of recollection by external stimuli is a domain-general phenomenon produced by interference between resource-limited, top-down mechanisms that guide the selection of mnemonic details and control processes that mediate our interactions with external distractors.
2025, Journal of Voice
Objectives/Hypothesis. To verify listeners' ability to discriminate human and synthesized voice samples. Study Design. This is a prospective study. Methods. A total of 70 subjects, 20 voice specialist speech-language pathologists (V-SLPs), 20 general SLPs (G-SLPs), and 30 naive listeners (NLs), participated in a listening task that was simply to classify the stimuli as human or synthesized. Samples of 36 voices, 18 human and 18 synthesized vowels, male and female (9 each), with different types and degrees of deviation, were presented with 50% repetition to verify intrarater consistency. Human voices were collected from a vocal clinic database. Voice disorders were simulated by perturbations of vocal frequency, jitter (roughness), additive noise (breathiness), and by increasing tension and decreasing separation of the vocal folds (strain). Results. The average error rate across all groups was 37.8%: 31.9% for V-SLPs, 39.3% for G-SLPs, and 40.8% for NLs. V-SLPs had a smaller mean percentage error for synthesized (24.7%), breathy (36.7%), synthesized breathy (30.8%), tense (25%), and female (27.5%) voices. G-SLPs and NLs presented equal mean percentage error for all voice classifications. Taking all groups together, there was no difference in mean percentage error between human and synthesized voices (P value = 0.452). Conclusions. The quality of the synthesized samples was very high. V-SLPs presented a lower error rate, which allows us to infer that auditory training assists in vocal analysis tasks.
2025, Journal of Voice
This study aimed to verify whether the resonant voice based on Lessac's Y-Buzz can be perceived by listeners as resonant and different from the habitual voice, and to compare the two to determine whether this sound exploration improves vocal production. Nine newly graduated actors, six men and three women without voice complaints, were the subjects. They received a session of Lessac's Y-Buzz training from the primary investigator. Before training, they were asked to sustain the vowel /i/ at a comfortable frequency and habitual loudness. After training, they were requested to sustain the Y-Buzz they had learned at a comfortable frequency and habitual loudness. Three speech-language pathologists (SLPs) trained in voice performed an auditory-perceptual analysis. The pre- and posttraining voice samples were randomly spliced together, edited, and presented in pairs to perceptual judges who were asked to identify the more resonant of the pair. The voice samples were also acoustically compared through the Hoarseness Diagram and acoustic measures using the VoxMetria software (CTS, version 2.0s, Brazil). The Y-Buzz trials were identified as the resonant voice in 74% of the comparisons. The acoustic measures showed a statistically significant decrease of irregularity (P = 0.002) and shimmer (P = 0.38). The Hoarseness Diagram demonstrated how the resonant voice moved toward normality for the irregularity and noise components. The results showed that the resonant voice based on the Y-Buzz can be identified as resonant and different from normal voicing in the same subject, and it apparently implies better vocal production, demonstrating a significant decrease of shimmer and irregularity in the Hoarseness Diagram evaluation.
2025, Journal of Experimental Biology
The ability of cod, Gadus morhua (L.), and haddock, Melanogrammus aeglefinus (L.), to discriminate changes in sound direction and amplitude was studied using a cardiac conditioning technique. In one experiment it was found that the masking effect of noise transmitted from one sound projector on the ability of the fish to detect a tone (60–380 Hz) transmitted from another projector was reduced by 7 dB when the angle between the projectors was 45° or greater. It was also shown that the fish could be conditioned to a change in the direction of a pulsed tone switched between two projectors. The fish were able to discriminate changes in sound amplitude of 1.3–9.5 dB at frequencies between 50 and 380 Hz. The results are discussed in relation to sound localization in fish.
2025, The Journal of the Acoustical Society of America
Previous studies of frequency selectivity have suggested a strong positive correlation between age and the width of auditory filters [Patterson et al., J. Acoust. Soc. Am. 72, 1788–1803 (1982)]. However, given that absolute thresholds are generally higher in older listeners, it is unclear whether the broader filter shapes are a consequence of aging per se or are associated with changes in absolute sensitivity. To dissociate the effects of hearing loss and increased age on changes in frequency selectivity, this study measured auditory filter shapes at 2 kHz in (1) normal-hearing young subjects; (2) elderly (over age 65) subjects with normal 2-kHz thresholds; (3) young subjects with 2-kHz thresholds elevated either 20 or 40 dB by a narrow-band masker; and (4) elderly subjects with varying degrees of hearing loss at 2 kHz. Rounded exponential filter shapes were derived from the data using the method described by Patterson [J. Acoust. Soc. Am. 59, 640–654 (1976)]. Equivalent rectangular...
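The rounded-exponential ("roex") filter shape referred to above has a simple closed form: the symmetric roex(p) weighting is W(g) = (1 + pg)exp(-pg), where g = |f - f0|/f0 is the normalized deviation from the center frequency, and its equivalent rectangular bandwidth is ERB = 4·f0/p. A brief sketch with an illustrative parameter value (p = 25 is hypothetical, not a figure from this study):

```python
import numpy as np

def roex_p(g, p):
    """Symmetric roex(p) filter weighting W(g) = (1 + p*g) * exp(-p*g),
    where g = |f - f0| / f0 is the normalized deviation from center frequency f0."""
    g = np.abs(np.asarray(g, dtype=float))
    return (1.0 + p * g) * np.exp(-p * g)

def erb_hz(p, f0_hz):
    """Equivalent rectangular bandwidth of the symmetric roex(p) filter: 4*f0/p."""
    return 4.0 * f0_hz / p

# Illustrative values: p = 25 at f0 = 2 kHz gives an ERB of 320 Hz; a broader
# filter (smaller p), as found in hearing-impaired listeners, gives a larger ERB.
print(erb_hz(25.0, 2000.0))  # → 320.0
```

Deriving filter shapes from notched-noise masking data, as in the study, amounts to fitting p (and hence the ERB) to the measured thresholds.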
2025, Frontiers in neurology
Galvanic vestibular stimulation (GVS) delivered as zero-mean current noise (noisy GVS) has been shown to improve static and dynamic postural stability probably by enhancing vestibular information. The purpose of this study was to examine the effect of an imperceptible level noisy GVS on ocular vestibular-evoked myogenic potentials (oVEMPs) in response to bone-conducted vibration (BCV). oVEMPs to BCV were measured during the application of white noise GVS with an amplitude ranging from 0 to 300 µA [in root mean square (RMS)] in 20 healthy subjects. Artifacts in the oVEMPs caused by GVS were reduced by inverting the waveforms of noisy GVS in the later half of the stimulus from the one in the early half. We examined the amplitudes of N1 and N1-P1 and their latencies. Noisy GVS significantly increased the N1 and N1-P1 amplitudes (p < 0.05) whereas it had no significant effects on N1 or P1 latencies (p > 0.05). Noisy GVS had facilitatory effects in 79% of ears. The amplitude of the...
2025, Developmental science
Noise typically induces both peripheral and central masking of an auditory target. Whereas the idea that a deficit of speech in noise perception is inherent to dyslexia is still debated, most studies have actually focused on the peripheral contribution to the dyslexics' difficulties of perceiving speech in noise. Here, we investigated the respective contribution of both peripheral and central noise in three groups of children: dyslexic, chronological age matched controls (CA), and reading-level matched controls (RL). In all noise conditions, dyslexics displayed significantly lower performance than CA controls. However, they performed similarly or even better than RL controls. Scrutinizing individual profiles failed to reveal a strong consistency in the speech perception difficulties experienced across all noise conditions, or across noise conditions and reading-related performances. Taken together, our results thus suggest that both peripheral and central interference contribute...
2025, Vision Research
The fission illusion is induced by multisensory (audio-visual) integration. In the present study, we assume that perceptual efficiency affects the fission illusion's rate because this illusion occurs in a short temporal range through the integration of visual and auditory information. The present study examined the effect of perceptual efficiency on the fission illusion by presenting visual patterns with various degrees of complexity. The results indicated that it was more difficult to induce the fission illusion when more complex visual patterns were used. The effect of pattern on the illusion differed according to the stimulus onset asynchrony between the first visual stimulus and the second auditory stimulus. These results suggest that the fission illusion has a higher probability of occurring when the perceptual process of the first visual stimulus is completed and integrated with the first beep before the presentation of the second beep. Thus, the audio-visual integration is affected by the perceptual efficiency of the physical stimuli.
2025, Seeing and Perceiving
High-definition multimodal displays are necessary to advance information and communications technologies. Such systems mainly present audio-visual information because this sensory information includes rich spatiotemporal information. Recently, not only audio-visual information but also other sensory information, for example touch, smell, and vibration, has come to be presented easily. The potential of such information is expanded to realize high-definition multimodal displays. We specifically examined the effects of full-body vibration information on the perceived reality of audio-visual content. As indexes of perceived reality, we used the sense of presence and the sense of verisimilitude. The latter reflects the appreciative role of foreground components in multimodal contents, whereas the former is related more closely to background components included in a scene. Our previous report described differences in the characteristics of both senses in response to audio-visual contents (Kanda et al., IMRF2011). In the present experiments, various amounts of full-body vibration were presented with an audio-visual movie, which was recorded via a camera and microphone set on a wheelchair. Participants reported the amounts of perceived sense of presence and verisimilitude. Results revealed that the intensity of full-body vibration characterized both senses differently. The sense of presence increased linearly according to the intensity of full-body vibration, while the sense of verisimilitude showed a nonlinear tendency. These results suggest that not only audio-visual information but also full-body vibration is important for developing high-definition multimodal displays.
2025, I-perception
A fission illusion (also named a double-flash illusion) is a famous phenomenon of audio-visual interaction, in which a single brief flash is perceived as two flashes when presented simultaneously with two brief beeps (Shames, Kamitani, & Shimojo, 2000; 2002). The fission illusion has been investigated using relatively simple visual stimuli, such as a single circle; it has not been examined using complex visual stimuli. Markovic & Gvozdenovic (2001) reported that the processing of complex visual stimuli tends to be delayed. Therefore, the complexity of visual stimuli may affect the occurrence rate of the fission illusion, since the illusion is generated in a process that must integrate visual and auditory stimuli within a short time. The present study examined differences in illusory occurrence rates by manipulating the complexity of visual stimuli. We used the patterns proposed by Garner & Clement (1963) to control the complexity. The results indicated that it was more difficult to induce the fission illusion using complex visual stimuli than using simple stimuli. Thus, the present study suggests that the occurrence rate of the fission illusion differs depending on perceptual efficiency in the coding process of visual stimuli.
2025, I-perception
We found that a hand posture with the palms together located just below the stream/bounce display could increase the proportion of bouncing perception. This effect, called the hands-induced bounce (HIB) effect, did not occur in the hands-cross condition or in the one-hand condition. By using rubber hands or covering the participants' hands with a cloth, we demonstrated that the visual information of the hand shapes was not a critical factor in producing the HIB effect, whereas proprioceptive information seemed to be important. We also found that the HIB effect did not occur when the participants' hands were far from the coincidence point, suggesting that the HIB effect might be produced within a limited spatial area around the hands.
2025, Seeing and Perceiving
The aim of this study is to investigate whether or not spatial congruency between tactile and auditory stimuli would influence the tactile roughness discrimination of stimuli presented to the fingers or cheeks. In the experiment, when abrasive films were passively presented to the participants, white noise bursts were simultaneously presented from the same or different side, either near or far from the head. The results showed that when white noise was presented from the same side as the tactile stimuli, especially from near the head, the discrimination sensitivity on the cheeks was higher than when sound was absent or presented from a different side. A similar pattern was observed in discrimination by the fingers but it was not significant. The roughness discrimination by the fingers was also influenced by the presentation of sound close to the head, but significant differences between conditions with and without sounds were observed at the decisional level. Thus, the spatial congruency between tactile and auditory information selectively modulated the roughness sensitivity of the skin on the cheek, especially when the sound source was close to the head.
2025, Brain research. Cognitive brain research
The modulation of the somatosensory N140 was examined in a selective attention task where a control condition was applied and the interstimulus interval (ISI) was varied. Electrical stimuli were randomly presented to the left index (p=0.4) and middle fingers (p=0.1), and right index (p=0.4) and middle fingers (p=0.1). In the attend-right condition, subjects were instructed to count silently the number of infrequent target stimuli presented to the right middle finger, and to the left middle finger in the attend-left condition. They had no task in the control condition. Each condition was performed with two different sets of ISI (mean 400 vs. 800 ms). The somatosensory N140 elicited by frequent standard stimuli was analyzed. The N140 amplitude was larger for the attended ERP compared to the control and unattended ERPs. This attention effect was more marked at the frontal electrodes compared to the temporal electrodes contralateral to the stimulation side. Furthermore, the attention ef...
2025, Journal of the American Society for Information Science
Based on Mulkay's and Kuhn's models of change in scientific structure, a scientific communication model of the emergence of a hybrid research area was developed and tested in the field of developmental dyslexia. Data included co-citation data on 74 dyslexia researchers at three points in time, who-to-whom communication network data, survey responses, resumes, association and biographical sources, online reference and citation databases, publications, grant databases, and telephone interviews. Researchers were partitioned into "blocks" of similar scientists on the basis of co-citation and communication relations, compared on selected network-level and individual-level characteristics in order to validate block labels, and situated historically in the politics and advances surrounding the problem area. Results show support for Mulkay's model of branching instead of Kuhn's model of scientific revolution. Evidence points to divergence rather than convergence among the related research areas, but suggests the need for longitudinal follow-up in order to rule out the impact of the inertia of aggregate co-citation data. Implications for theory, methodology, and research are discussed.
2025, Sworld-Us Conference proceedings
Wind instruments have traveled a long and complex evolutionary path in the history of musical culture. Modifications of particular wind instruments gave musicians a more comfortable playing experience and allowed the instruments' artistic potential to be revealed to the fullest. This had a substantial influence in stimulating composers to realize their creative ideas. Among the variety of wind instruments, a special place belongs to the trumpet. Its bright timbre, gradations of dynamic sound, and technical agility ensure its full participation in instrumental ensembles and orchestras (symphonic, folk, wind, and popular). The article investigates the problems and main components of the process of organizing the concert performance activity of a trumpet ensemble. The results obtained by the author in the course of the study, together with the author's articles listed in the references, significantly supplement the available information on the development of the ensemble form of trumpet performance and will be of interest to specialists in the musical arts. Keywords: wind-instrument art; trumpet; wind-instrument ensembles; trumpet ensemble, its varieties and forms; concert activity.
2025, Physics of Plasmas
It is found that the multipole moments of a toroidal current are given by the coefficients of the Legendre polynomial expansion of the magnetic field on a meridian contour. Using this fact and the orthogonality of the Legendre polynomials, a method is proposed for evaluating the moments from magnetic field measurements along an open contour. As an application, exact expressions for the first few moments and the current center position are formulated. Results show that this method is applicable to tokamaks of any aspect ratio, without the limitation of a small displacement of the current center.
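The orthogonality step behind the proposed evaluation can be illustrated numerically. The sketch below is illustrative only: `legendre_coeffs` and the sample profile are not from the paper, and in practice sampled magnetic-field measurements would replace the analytic integrand. It recovers expansion coefficients of a function on [-1, 1] via Gauss-Legendre quadrature and the orthogonality relation c_l = (2l+1)/2 ∫ f(x) P_l(x) dx.

```python
import numpy as np
from numpy.polynomial import legendre as L

def legendre_coeffs(f, lmax, n=200):
    """Recover coefficients c_l of f(x) = sum_l c_l P_l(x) on [-1, 1]
    using Gauss-Legendre quadrature and orthogonality of the P_l."""
    x, w = L.leggauss(n)                  # quadrature nodes and weights
    fx = f(x)
    coeffs = []
    for l in range(lmax + 1):
        Pl = L.legval(x, [0] * l + [1])   # P_l evaluated at the nodes
        coeffs.append((2 * l + 1) / 2 * np.sum(w * fx * Pl))
    return np.array(coeffs)
```

For a profile built from known Legendre components, the recovered coefficients match the construction, which is the property the open-contour moment method relies on.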
2025
Selective attention contributes to perceptual efficiency by modulating cortical activity according to task demands. The majority of attentional research has focused on the effects of attention to a single modality, and little is known about the role of attention in multimodal sensory processing. Here we employ a novel experimental design to examine the electrophysiological basis of audio-visual attention shifting. We use electroencephalography (EEG) to study differences in brain dynamics between quickly shifting attention between modalities and focusing attention on a single modality for extended periods of time. We also address interactions between attentional effects generated by the attention-shifting cue and those generated by subsequent stimuli. The conclusions from these examinations address key issues in attentional research, including the supramodal theory of attention and the role of attention in foveal vision. The experimental design and analysis methods used here may suggest new directions in the study of the physiological basis of attention.
2025, The Journal of experimental biology
This article discusses the impact of a series of classic papers, published by Mark Konishi and Eric Knudsen in the 1970s, presenting the discovery of the owl auditory map.
2025, Journal of Neurobiology
The KCNC1 (previously Kv3.1) potassium channel, a delayed rectifier with a high threshold of activation, is highly expressed in the time coding nuclei of the adult chicken and barn owl auditory brainstem. The proposed role of KCNC1 currents in auditory neurons is to reduce the width of the action potential and enable neurons to transmit high frequency temporal information with little jitter. Because developmental changes in potassium currents are critical for the maturation of the shape of the action potential, we used immunohistochemical methods to examine the developmental expression of KCNC1 subunits in the avian auditory brainstem. The KCNC1 gene gives rise to two splice variants, a longer KCNC1b and a shorter KCNC1a that differ at the carboxy termini. Two antibodies were used: an antibody to the N‐terminus that does not distinguish between KCNC1a and b isoforms, denoted as panKCNC1, and another antibody that specifically recognizes the C terminus of KCNC1b. A comparison of the ...
2025
An impulse response of an enclosed reverberant space is composed of three basic components: the direct sound, early reflections, and late reverberation. While the direct sound is a single event that can be easily identified, the division between the early reflections and late reverberation is less obvious, as there is a gradual transition between the two. This paper explores two statistical measures that can aid in determining a point in time where the early reflections have transitioned into late reverberation. These metrics exploit the similarities between late reverberation and Gaussian noise that are not commonly found in early reflections. Unlike other measures, these require no prior knowledge about the room, such as its geometry or volume.
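The observation that late reverberation statistically resembles Gaussian noise, while early reflections are sparse, suggests a simple sliding-window test. The sketch below is a generic illustration of that idea, not the specific measures proposed in the paper: the window length, hop, `tol` threshold, and the choice of excess kurtosis as the Gaussianity statistic are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import kurtosis

def mixing_time_index(ir, win=512, hop=128, tol=0.5):
    """Estimate the sample index where an impulse response transitions
    from sparse early reflections to Gaussian-like late reverberation.

    Heuristic: sparse reflections are strongly super-Gaussian (large
    positive excess kurtosis), whereas Gaussian-like late reverberation
    has excess kurtosis near 0.
    """
    for start in range(0, len(ir) - win, hop):
        k = kurtosis(ir[start:start + win])  # excess kurtosis; 0 for Gaussian
        if abs(k) < tol:
            return start
    return len(ir)  # never became Gaussian-like
```

On a synthetic impulse response with isolated spikes followed by a decaying Gaussian tail, the estimate lands near the start of the tail; real measurements would need smoothing and a more careful threshold.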
2025
Reverberation, a ubiquitous feature of real-world acoustic environments, exhibits statistical regularities that human listeners leverage to self-orient, facilitate auditory perception, and understand their environment. Despite the extensive research on sound source representation in the auditory system, it remains unclear how the brain represents real-world reverberant environments. Here, we characterized the neural response to reverberation of varying realism by applying multivariate pattern analysis to electroencephalographic (EEG) brain signals. Human listeners (12 males and 8 females) heard speech samples convolved with real-world and synthetic reverberant impulse responses and judged whether the speech samples were in a "real" or "fake" environment, focusing on the reverberant background rather than the properties of speech itself. Participants distinguished real from synthetic reverberation with ∼75% accuracy; EEG decoding reveals a multistage decoding time course, with dissociable components early in the stimulus presentation and later in the perioffset stage. The early component predominantly occurred in temporal electrode clusters, while the later component was prominent in centroparietal clusters. These findings suggest distinct neural stages in perceiving natural acoustic environments, likely reflecting sensory encoding and higher-level perceptual decision-making processes. Overall, our findings provide evidence that reverberation, rather than being largely suppressed as a noise-like signal, carries relevant environmental information and gains representation along the auditory system. This understanding also offers various applications; it provides insights for including reverberation as a cue to aid navigation for blind and visually impaired people. It also helps to enhance realism perception in immersive virtual reality settings, gaming, music, and film production.
2025, Journal of Personality and Social Psychology
This study tested the hypothesis that minimal cues from a model (i.e., information about changes in the heart rate of a model interpreted by an observer as caused by either noxious or innocuous antecedents) are sufficient to produce vicarious classical conditioning effects. The design used four groups of 12 subjects. Three groups of subjects heard the heart beats of a model who was ostensibly being shocked during a period of white noise which followed a tone. A fourth group thought the noise was caused by a slide projector. Among those subjects hearing a model being "shocked," one third heard a change of heart rate after each shock, one third heard no change in heart rate, and the remaining third were a sensitization control group. The subjects' heart rate was recorded, and a postexperimental questionnaire was administered. A pronounced and significant decelerative cardiac response was found in the interstimulus interval for the experimental condition as compared to the three control groups combined. Thus, vicarious conditioning effects were obtained using only the model's heart rate as a cue to his emotional response.
2025, Brain Research
Positive allosteric modulators (PAMs) for the α7 nicotinic receptor hold promise for the treatment of sensory inhibition deficits observed in schizophrenia patients. Studies of these compounds in the DBA/2 mouse, which models the schizophrenia-related deficit in sensory inhibition, have shown PAMs to be effective in improving the deficit. However, the first published clinical trial of a PAM for both sensory inhibition deficits and related cognitive difficulties failed, casting a shadow on this therapeutic approach. The present study used both DBA/2 mice, and C3H Chrna7 heterozygote mice to assess the ability of the α7 PAM, PNU-120596, to improve sensory inhibition. Both of these strains of mice have reduced hippocampal α7 nicotinic receptor numbers and deficient sensory inhibition similar to schizophrenia patients. Low doses of PNU-120596 (1 or 3.33 mg/kg) were effective in the DBA/2 mouse but not the C3H Chrna7 heterozygote mouse. Moderate doses of the selective α7 nicotinic receptor agonist, choline chloride (10 or 33 mg/kg), were also ineffective in improving sensory inhibition in the C3H Chrna7 heterozygote mouse. However, combining the lowest doses of both PNU-120596 and choline chloride in this mouse model did improve sensory inhibition. We propose here that the difference in efficacy of PNU-120596 between the 2 mouse strains is driven by differences in hippocampal α7 nicotinic receptor numbers, such that C3H Chrna7 heterozygote mice require additional direct stimulation of the α7 receptors. These data may have implications for further clinical testing of putative α7 nicotinic receptor PAMs.
2025, Quarterly Journal of Experimental Psychology
In the present study, we tested three hypotheses that account for after-effects of response inhibition and goal shifting: the goal-shifting hypothesis, the reaction time (RT) adjustment hypothesis, and the stimulus-goal association hypothesis. To distinguish between the hypotheses, we examined performance in the stop-change paradigm and the dual-task paradigm. In the stop-change paradigm, we found that responding on no-signal trials slowed down when a stop-change signal was presented on the previous trial. Similarly, in the dual-task paradigm, we found that responding on no-signal trials slowed down when a dual-task signal was presented on the previous trial. However, after-effects of unsuccessful inhibition or dual-task performance were observed only when the stimulus of the previous trial was repeated. These results are consistent with the stimulus-goal association hypothesis, which assumes that the stimulus is associated with the different task goals on signal trials; when the stimulus is repeated, the task goals are retrieved, and interference occurs.
2025, Brain Research
Top-down controlled selective or divided attention to sounds and visual objects, as well as bottom-up triggered attention to auditory and visual distractors, has been widely investigated. However, no study has systematically compared brain activations related to all these types of attention. To this end, we used functional magnetic resonance imaging (fMRI) to measure brain activity in participants performing a tone pitch or a foveal grating orientation discrimination task, or both, distracted by novel sounds not sharing frequencies with the tones or by extrafoveal visual textures. To force focusing of attention to tones or gratings, or both, task difficulty was kept constantly high with an adaptive staircase method. A whole brain analysis of variance (ANOVA) revealed fronto-parietal attention networks for both selective auditory and visual attention. A subsequent conjunction analysis indicated partial overlaps of these networks. However, like some previous studies, the present results also suggest segregation of prefrontal areas involved in the control of auditory and visual attention. The ANOVA also suggested, and another conjunction analysis confirmed, an additional activity enhancement in the left middle frontal gyrus related to divided attention supporting the role of this area in top-down integration of dual task performance. Distractors expectedly disrupted task performance. However, contrary to our expectations, activations specifically related to the distractors were found only in the auditory and visual cortices. This suggests gating of the distractors from further processing perhaps due to strictly focused attention in the current demanding discrimination tasks.
2025, Brain
Auditory verbal hallucinations (hearing voices) are typically associated with psychosis, but a minority of the general population also experience them frequently and without distress. Such 'non-clinical' experiences offer a rare and unique opportunity to study hallucinations apart from confounding clinical factors, thus allowing for the identification of symptom-specific mechanisms. Recent theories propose that hallucinations result from an imbalance of prior expectation and sensory information, but whether such an imbalance also influences auditory-perceptual processes remains unknown. We examine for the first time the cortical processing of ambiguous speech in people without psychosis who regularly hear voices. Twelve non-clinical voice-hearers and 17 matched controls completed a functional magnetic resonance imaging scan while passively listening to degraded speech ('sine-wave' speech), that was either potentially intelligible or unintelligible. Voice-hearers reported recognizing the presence of speech in the stimuli before controls, and before being explicitly informed of its intelligibility. Across both groups, intelligible sine-wave speech engaged a typical left-lateralized speech processing network. Notably, however, voice-hearers showed stronger intelligibility responses than controls in the dorsal anterior cingulate cortex and in the superior frontal gyrus. This suggests an enhanced involvement of attention and sensorimotor processes, selectively when speech was potentially intelligible. Altogether, these behavioural and neural findings indicate that people with hallucinatory experiences show distinct responses to meaningful auditory stimuli. A greater weighting towards prior knowledge and expectation might cause non-veridical auditory sensations in these individuals, but it might also spontaneously facilitate perceptual processing where such knowledge is required. 
This has implications for the understanding of hallucinations in clinical and non-clinical populations, and is consistent with current 'predictive processing' theories of psychosis.
2025, Cerebral cortex (New York, N.Y. : 1991)
Humans can generate mental auditory images of voices or songs, sometimes perceiving them almost as vividly as perceptual experiences. The functional networks supporting auditory imagery have been described, but less is known about the systems associated with interindividual differences in auditory imagery. Combining voxel-based morphometry and fMRI, we examined the structural basis of interindividual differences in how auditory images are subjectively perceived, and explored associations between auditory imagery, sensory-based processing, and visual imagery. Vividness of auditory imagery correlated with gray matter volume in the supplementary motor area (SMA), parietal cortex, medial superior frontal gyrus, and middle frontal gyrus. An analysis of functional responses to different types of human vocalizations revealed that the SMA and parietal sites that predict imagery are also modulated by sound type. Using representational similarity analysis, we found that higher representationa...
2025, Cognition and Emotion
It is well established that categorizing the emotional content of facial expressions may differ depending on contextual information. Whether this malleability is observed in the auditory domain and in genuine emotion expressions is poorly explored. We examined the perception of authentic laughter and crying in the context of happy, neutral and sad facial expressions. Participants rated the vocalizations on separate unipolar scales of happiness and sadness, and on arousal. Although they were instructed to focus exclusively on the vocalizations, consistent context effects were found: For both laughter and crying, emotion judgments were shifted towards the information expressed by the face. These modulations were independent of response latencies and were larger for more emotionally ambiguous vocalizations. No effects of context were found for arousal ratings. These findings suggest that the automatic encoding of contextual information during emotion perception generalizes across modalities, to purely nonverbal vocalizations, and is not confined to acted expressions.
2025
Loudness normalisation has been heralded as a tonic for the loudness wars. In this paper we propose that a side effect of its implementation may be a greater awareness of sound quality. This side effect is explored through an analysis of the manner in which music is listened to under this new paradigm. It is concluded that the conditions necessary for sound quality judgments have been provided, but that the existing preference for hyper-compression may affect the de-escalation of its use in the pop music industry. The aesthetic concerns of hyper-compression are examined in order to determine the sonic trade-offs or perceived benefits inherent in its application. Factors considered include: (i) loss of excitement or emotion, (ii) audition bias in listening environments, (iii) hyper-compression as an aesthetic preference, (iv) the increased cognitive load of hyper-compression, and (v) the ability of loudness variation to define musical structures. The findings sugges...
2025
Twenty-two criterion referenced and standardized tests commonly used to diagnose (central) auditory processing disorders were evaluated for both diagnostic accuracy and test validity. Tests were evaluated for evidence of diagnostic accuracy, level of acceptability of any identified diagnostic accuracy, and test validity for those tests with reported levels of diagnostic accuracy. Criteria for test validity were modified from McCauley and Swisher (1984) and McCauley (1996). Results indicated that 45% of reviewed tests had published evidence of diagnostic accuracy, although only 23% of tests met criteria for acceptable levels of both sensitivity and specificity. Evaluation of test validity indicated strengths in procedural aspects of test administration and weaknesses in various aspects of reliability and validity. Because sufficient evidence to support the reliability and validity of many (C)APD tests is not available in published data, findings indicated a clear need for educational...