Florencia Assaneo - Academia.edu

Papers by Florencia Assaneo

Adaptive oscillators provide a hard-coded Bayesian mechanism for rhythmic inference

Bayesian theories of perception suggest that the human brain internalizes a model of environmental patterns to reduce sensory noise and improve stimulus processing. The internalization of external regularities is particularly manifest in the time domain: humans excel at predictively synchronizing their behavior with external rhythms, as in dance or music performance. The neural processes underlying rhythmic inferences are debated: whether predictive perception relies on high-level generative models, or whether it can readily be implemented locally by hard-coded intrinsic oscillators synchronizing to rhythmic input, remains unclear. Here, we propose that these seemingly antagonistic accounts can be conceptually reconciled. In this view, neural oscillators may constitute hard-coded physiological priors – in a Bayesian sense – that reduce temporal uncertainty and facilitate the predictive processing of noisy rhythms. To test this, we asked human participants to track pseudo-rhythmic tone...
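
The oscillator-as-prior idea can be made concrete with a toy model. Below is a minimal sketch (not the authors' model; function name and all parameters are illustrative) of an adaptive phase oscillator: its starting frequency f0 plays the role of a prior on the stimulus rate, and each tone onset supplies evidence that corrects both phase and frequency.

```python
import numpy as np

def track_rhythm(onset_times, f0=2.0, k_phi=0.3, k_f=0.2, dt=0.001):
    """Adaptive phase oscillator: each stimulus onset nudges the phase
    toward zero and adapts the intrinsic frequency, so the oscillator
    comes to predict upcoming events. f0 acts like a prior on the rate;
    k_phi and k_f set how strongly evidence overrides that prior."""
    n_steps = int((onset_times[-1] + 0.5) / dt)
    is_onset = np.zeros(n_steps, dtype=bool)
    is_onset[(np.asarray(onset_times) / dt).astype(int)] = True
    phi, f = 0.0, f0
    freqs = np.empty(n_steps)
    for t in range(n_steps):
        phi = (phi + 2 * np.pi * f * dt) % (2 * np.pi)
        if is_onset[t]:
            err = np.sin(-phi)  # soft phase error: zero when the onset
            phi += k_phi * err  # lands exactly at the expected phase
            f += k_f * err
        freqs[t] = f
    return freqs

# jittered ("pseudo-rhythmic") ~2.5 Hz tone sequence: the oscillator
# drifts from its 2 Hz prior toward the rate supported by the evidence
rng = np.random.default_rng(0)
onsets = np.cumsum(0.4 + 0.05 * rng.standard_normal(30))
print(f"final intrinsic frequency: {track_rhythm(onsets)[-1]:.2f} Hz")
```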

Amplitude modulation perceptually distinguishes music and speech

Music and speech are complex and distinct auditory signals that are both foundational to the human experience. The mechanisms underpinning each domain are widely investigated. However, how little acoustic information is in fact required to distinguish between them remains an open question. Here we test the hypothesis that a sound’s amplitude modulation (AM) is a critical acoustic feature. In contrast to paradigms using ecologically valid, complex acoustic signals (which can be challenging to interpret), we use an aggressively reductionist approach: if AM rate and AM regularity are critical for perceptually distinguishing music and speech, judgements of artificially noise-synthesized ambiguous audio signals should align with their AM parameters. Across four experiments (N = 335), signals with a higher peak AM frequency tend to be judged as speech and those with a lower peak AM frequency as music, especially among musically sophisticated listeners. In addition, noise signals with more regular AM are judged as ...
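
The reductionist logic lends itself to a simple stimulus sketch. The hypothetical generator below (not the study's synthesis code; parameter values are illustrative) imposes an amplitude envelope on white noise, with the peak AM frequency and the regularity of the modulation as the two free parameters the abstract singles out.

```python
import numpy as np

def am_noise(peak_hz, regularity, dur=2.0, sr=22050, seed=0):
    """White noise with an imposed amplitude envelope: raised-cosine
    bumps spaced 1/peak_hz apart on average, with `regularity` in (0, 1]
    scaling down the jitter of that spacing. Illustrative only."""
    rng = np.random.default_rng(seed)
    period = 1.0 / peak_hz
    jitter = (1.0 - regularity) * 0.5 * period
    bump_times, t = [], 0.0
    while t < dur:
        bump_times.append(t)
        t += period + rng.uniform(-jitter, jitter)
    n = int(dur * sr)
    env = np.zeros(n)
    half = int(0.5 * period * sr)  # bump half-width in samples
    bump = 0.5 * (1 - np.cos(np.linspace(0, 2 * np.pi, 2 * half)))
    for tb in bump_times:
        i = int(tb * sr)
        seg = bump[: max(0, n - i)]
        env[i : i + len(seg)] += seg
    return env / env.max() * rng.standard_normal(n)

fast_am = am_noise(peak_hz=4.5, regularity=0.3)  # AM rate near typical syllable rates
slow_am = am_noise(peak_hz=1.5, regularity=0.9)  # slower, more regular AM
```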

The Relationship Between Auditory-Motor Integration, Interoceptive Awareness, and Self-Reported Stuttering Severity

Frontiers in Integrative Neuroscience, May 6, 2022

Stuttering is a neurodevelopmental speech disorder associated with motor timing that differs from that of non-stutterers. While neurodevelopmental disorders impacted by timing are associated with compromised auditory-motor integration and interoception, the interplay between these abilities and stuttering remains unexplored. Here, we studied the relationships between speech auditory-motor synchronization (a proxy for auditory-motor integration), interoceptive awareness, and self-reported stuttering severity using remotely delivered assessments. Results indicate that, in general, stutterers and non-stutterers exhibit similar auditory-motor integration and interoceptive abilities. However, while speech auditory-motor synchrony (i.e., integration) and interoceptive awareness were not related, speech synchrony was inversely related to the speaker's perception of stuttering severity as perceived by others, and interoceptive awareness was inversely related to self-reported stuttering impact. These findings support claims that stuttering is a heterogeneous, multi-faceted disorder: the uncorrelated auditory-motor integration and interoception measurements predicted different aspects of stuttering, suggesting two unrelated sources of timing differences associated with the disorder.

Decoding imagined speech reveals speech planning and production mechanisms

Speech imagery (the ability to internally generate quasi-perceptual experiences of speech) is a fundamental ability linked to cognitive functions such as inner speech, phonological working memory, and predictive processing. Speech imagery is also considered an ideal tool to test theories of overt speech. The study of speech imagery is challenging, primarily because of the absence of overt behavioral output as well as the difficulty of temporally aligning imagery events across trials and individuals. We used magnetoencephalography (MEG) paired with temporal-generalization-based neural decoding and a simple behavioral protocol to determine the processing stages underlying speech imagery. We monitored participants’ lip and jaw micromovements during mental imagery of syllable production using electromyography. Decoding participants’ imagined syllables revealed a sequence of task-elicited representations. Importantly, participants’ micromovements did not discriminate between syllables. T...
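
Temporal generalization decoding, the analysis named here, trains a classifier at one time point and tests it at all others; sustained off-diagonal generalization distinguishes maintained from sequentially evolving representations. A minimal sketch on synthetic data (a real MEG pipeline would add cross-validation, scaling, and noise normalization):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def temporal_generalization(X, y):
    """X: (trials, channels, times); y: (trials,) class labels.
    Returns the (train_time, test_time) accuracy matrix: a classifier
    fit at each time point is scored at every other time point."""
    n_trials, _, n_times = X.shape
    half = n_trials // 2  # crude single split, for brevity
    acc = np.zeros((n_times, n_times))
    for t_train in range(n_times):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X[:half, :, t_train], y[:half])
        for t_test in range(n_times):
            acc[t_train, t_test] = clf.score(X[half:, :, t_test], y[half:])
    return acc

rng = np.random.default_rng(1)
X = rng.standard_normal((80, 30, 40))  # fake epochs: 80 trials, 30 sensors
y = rng.integers(0, 2, 80)             # two imagined syllables
tg = temporal_generalization(X, y)     # ~0.5 everywhere on random data
```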

Corrigendum: The Lateralization of Speech-Brain Coupling Is Differentially Modulated by Intrinsic Auditory and Top-Down Mechanisms

Frontiers in Integrative Neuroscience, 2020

The authors apologize for this error and state that this does not change the scientific conclusions of the article in any way. The original article has been updated.

The Lateralization of Speech-Brain Coupling Is Differentially Modulated by Intrinsic Auditory and Top-Down Mechanisms

Frontiers in Integrative Neuroscience, 2019

The lateralization of neuronal processing underpinning hearing, speech, language, and music is widely studied, vigorously debated, and still not understood in a satisfactory manner. One set of hypotheses focuses on the temporal structure of perceptual experience and links auditory cortex asymmetries to underlying differences in neural populations with differential temporal sensitivity (e.g., ideas advanced by Zatorre et al. (2002) and Poeppel (2003)). The Asymmetric Sampling in Time (AST) theory (Poeppel, 2003) builds on cytoarchitectonic differences between auditory cortices and predicts that modulation frequencies within the range of, roughly, the syllable rate are more accurately tracked by the right hemisphere. To date, this conjecture is reasonably well supported: while there is some heterogeneity in the reported findings, the predicted asymmetrical entrainment has been observed in various experimental protocols. Here, we show that under specific processing demands, the rightward dominance disappears. We propose an enriched and modified version of the asymmetric sampling hypothesis in the context of speech. Recent work (Rimmele et al., 2018b) proposes two different mechanisms underlying the auditory tracking of the speech envelope: one derived from the intrinsic oscillatory properties of auditory regions; the other induced by top-down signals coming from nonauditory regions of the brain. We propose that under non-speech listening conditions the intrinsic auditory mechanism dominates and thus, in line with AST, entrainment is rightward lateralized, as is widely observed. However, (i) depending on individual brain structural/functional differences, and/or (ii) in the context of specific speech listening conditions, the relative weight of the top-down mechanism can increase. In this scenario, the typically observed auditory sampling asymmetry (and its rightward dominance) diminishes or vanishes.

Auditory-motor synchronization varies among individuals and is critically shaped by acoustic features

Communications Biology

The ability to synchronize body movements with quasi-regular auditory stimuli is a fundamental human trait at the core of speech and music. Despite the long history of research on this ability, little attention has been paid to how acoustic features of the stimuli and individual differences modulate auditory-motor synchrony. Here, by exploring auditory-motor synchronization abilities across different effectors and types of stimuli, we reveal that this capability is more restricted than previously assumed. While the general population can synchronize to sequences composed of repetitions of the same acoustic unit, synchrony in a subgroup of participants is impaired when the unit’s identity varies across the sequence. In addition, synchronization in this group can be temporarily restored by priming with a facilitator stimulus. Auditory-motor integration is stable across effectors, supporting the hypothesis of a central clock mechanism subserving the dif...

Differential activation of a frontoparietal network explains population-level differences in statistical learning from speech

PLOS Biology

People of all ages display the ability to detect and learn from patterns in seemingly random stimuli. Referred to as statistical learning (SL), this process is particularly critical when learning a spoken language, helping to identify discrete words within a spoken phrase. Here, by considering individual differences in speech auditory–motor synchronization, we demonstrate that recruitment of a specific neural network supports behavioral differences in SL from speech. While independent component analysis (ICA) of fMRI data revealed that a network of auditory and superior pre/motor regions is universally activated in the process of learning, a frontoparietal network is additionally and selectively engaged only by some individuals (high auditory–motor synchronizers). Importantly, activation of this frontoparietal network is related to a boost in learning performance, and interference with this network via articulatory suppression (AS; i.e., producing irrelevant speech duri...
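
In the SL paradigms this work builds on, word boundaries are detectable only through transitional probabilities between adjacent syllables, which are high within words and drop at boundaries. A toy computation of those statistics (the pseudo-words below are made up, not the study's stimuli):

```python
import random
from collections import Counter

def transitional_probs(stream):
    """TP(b | a) = count(ab) / count(a) over a syllable stream; dips in
    TP mark candidate word boundaries."""
    pairs = Counter(zip(stream, stream[1:]))
    firsts = Counter(stream[:-1])
    return {(a, b): c / firsts[a] for (a, b), c in pairs.items()}

# a stream of three trisyllabic pseudo-words in random order
words = [("tu", "pi", "ro"), ("go", "la", "bu"), ("da", "ko", "ti")]
random.seed(0)
stream = [syl for _ in range(200) for syl in random.choice(words)]
tp = transitional_probs(stream)
print(tp[("tu", "pi")])   # within a word: 1.0
print(tp[("ro", "go")])   # across a boundary: about 1/3
```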

Speech-to-Speech Synchronization protocol to classify human participants as high or low auditory-motor synchronizers

STAR Protocols, 2022

Summary: The ability to synchronize a motor action to a rhythmic auditory stimulus is often considered an innate human skill. However, some individuals lack the ability to synchronize speech to a perceived syllabic rate. Here, we describe a simple and fast protocol to classify a single native English speaker as being or not being a speech synchronizer. This protocol consists of four parts: pretest instructions and volume adjustment, the training procedure, the execution of the main task, and data analysis. For complete details on the use and execution of this protocol, please refer to Assaneo et al. (2019a).
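
The classification in the main task rests on quantifying how strongly the participant's produced speech phase-locks to the perceived syllable train. A minimal sketch of such a measure (the published protocol's exact filtering, windowing, and threshold differ; the band edges here are illustrative):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(env_produced, env_heard, sr, band=(3.5, 5.5)):
    """Phase-locking value between the produced and heard speech
    envelopes, band-passed around the syllable rate. PLV near 1 means
    stable synchronization; near 0, none."""
    b, a = butter(2, [f / (sr / 2) for f in band], btype="band")
    dphi = (np.angle(hilbert(filtfilt(b, a, env_produced)))
            - np.angle(hilbert(filtfilt(b, a, env_heard))))
    return float(np.abs(np.mean(np.exp(1j * dphi))))

# participants whose PLV falls in the upper mode of the (bimodal)
# distribution are labeled high synchronizers, the rest low
```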

Musical Sophistication and Speech Auditory-Motor Coupling: Easy Tests for Quick Answers

Frontiers in Neuroscience, 2021

Musical training enhances auditory-motor cortex coupling, which in turn facilitates music and speech perception. How tightly the temporal processing of music and speech is intertwined is a topic of current research. We investigated the relationship between musical sophistication (Goldsmiths Musical Sophistication Index, Gold-MSI) and spontaneous speech-to-speech synchronization behavior as an indirect measure of speech auditory-motor cortex coupling strength. In a group of participants (n = 196), we tested whether the outcome of the spontaneous speech-to-speech synchronization test (SSS-test) can be inferred from self-reported musical sophistication. Participants were classified as high (HIGHs) or low (LOWs) synchronizers according to the SSS-test. HIGHs scored higher than LOWs on all Gold-MSI subscales (General Score, Active Engagement, Musical Perception, Musical Training, Singing Skills) except the Emotional Attachment scale. More specifically, compared to a previously reported Ge...

The Anatomy of Onomatopoeia

Virtually every human faculty engages with imitation. One of the most natural, yet unexplored, objects for studying the mimetic elements of language is the onomatopoeia, as it implies an imitation-driven transformation of a natural sound into a word. Notably, simple sounds are transformed into complex strings of vowels and consonants, making it difficult to identify what is acoustically preserved in this operation. In this work we propose a definition of vocal imitation by which sounds are transformed into the speech elements that minimize their spectral difference within the constraints of the vocal system. To test this definition, we use a computational model that recovers anatomical features of the vocal system from experimental sound data. We explore the vocal configurations that best reproduce non-speech sounds, like striking blows on a door or the sharp sounds generated by pressing on light switches or computer mouse buttons. From the anatomical point of vie...
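
The proposed definition is an optimization: among the sounds the vocal system can produce, pick the one minimizing spectral difference from the target. A schematic version of that argmin (the paper's model searches actual vocal-tract configurations and its distance metric may differ; here the candidates are hypothetical precomputed magnitude spectra on a shared frequency grid):

```python
import numpy as np

def spectral_distance(p, q):
    """Distance between two magnitude spectra, compared as normalized
    shapes (Hellinger distance, used here as one reasonable choice)."""
    p, q = p / p.sum(), q / q.sum()
    return float(np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

def best_imitation(target, producible):
    """The definition as code: the producible speech element whose
    spectrum is closest to the target sound's spectrum."""
    return min(producible, key=lambda k: spectral_distance(target, producible[k]))

# hypothetical candidate spectra, e.g. one per vowel configuration
rng = np.random.default_rng(2)
producible = {v: rng.random(128) for v in "aeiou"}
target = producible["o"] + 0.1 * rng.random(128)  # noisy stand-in for a natural sound
print(best_imitation(target, producible))         # -> "o"
```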

Repita la sílaba “ta” y le diremos cómo funciona su cerebro

Modelado del sistema vocal humano y su aplicación a estudios de percepción y producción de habla

From a biological standpoint, the speech process can be separated into two mutually modulating stages: production and perception. In this work we address both, focusing especially on the former. The human vocal system is made up of two major blocks: the vocal folds and the vocal tract. The vocal folds constitute the acoustic source, determining the intonation of the discourse, while the phonetic content (the sounds characteristic of the language) is defined by the dynamics of the vocal tract. In this thesis we present a complete model of vocal production, including the dynamical study of a detailed model of the vocal folds and its adaptation to a low-dimensional model of the vocal tract. To evaluate the quality of the voice synthesized with the model, we used a combination of perceptual tests and functional magnetic resonance imaging, whose results show that the synthetic voice is indistinguishable from segments of real speech. Synthesizers based ...
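
The two-block architecture described here (vocal folds as source, tract as filter) can be sketched in a few lines. This is the textbook source-filter reduction, not the thesis's detailed vocal fold model; the formant values are rough approximations for /a/ and /i/.

```python
import numpy as np
from scipy.signal import lfilter

def synth_vowel(f0=120.0, formants=(700, 1200, 2500), dur=0.5, sr=16000):
    """Glottal pulse train (source: sets intonation via f0) filtered by
    a cascade of two-pole resonators (vocal tract: sets the phonetic
    content via the formant frequencies)."""
    n = int(dur * sr)
    source = np.zeros(n)
    source[:: int(sr / f0)] = 1.0  # one impulse per pitch period
    out = source
    for fc in formants:
        r = np.exp(-np.pi * 80.0 / sr)  # pole radius for ~80 Hz bandwidth
        out = lfilter([1.0], [1.0, -2 * r * np.cos(2 * np.pi * fc / sr), r * r], out)
    return out / np.max(np.abs(out))

vowel_a = synth_vowel()                            # /a/-like timbre at 120 Hz
vowel_i = synth_vowel(formants=(300, 2300, 3000))  # /i/-like formants
```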

MEG and Language

This text is adapted from MEG and Language by the same authors, submitted to an issue of Neuroimaging Clinics on Magnetoencephalography edited by Drs. Roland Lee and Mingxiong Huang. All authors contributed equally to this work. Synopsis: We provide an introductory overview of research that uses magnetoencephalography (MEG) to understand the brain basis of human language. The cognitive processes and brain networks that have been implicated in written and spoken language comprehension and production are discussed in relation to different methodologies: we briefly review event-related brain responses, research on the coupling of neural oscillations to speech, oscillatory coupling between brain regions (e.g., auditory-motor coupling), and neural decoding approaches in naturalistic language comprehension. We end with a short section on the clinical relevance of MEG language research, focusing on dyslexia and specific language impairment.

Preferred auditory temporal processing regimes and auditory-motor synchronization

Psychonomic Bulletin & Review

Decoding the rich temporal dynamics of complex sounds such as speech is constrained by the underlying neuronal-processing mechanisms. Oscillatory theories suggest the existence of one optimal perceptual performance regime at auditory stimulation rates in the delta to theta range (< 10 Hz), but whether performance is reduced in the alpha range (10–14 Hz) remains controversial. Additionally, the widely discussed contribution of the motor system to timing remains unclear. We measured rate discrimination thresholds between 4 and 15 Hz, and estimated auditory-motor coupling strength through a behavioral auditory-motor synchronization task. In a Bayesian model comparison, high auditory-motor synchronizers showed a larger range of constant optimal temporal judgments than low synchronizers, with performance decreasing in the alpha range. This evidence for optimal processing in the theta range is consistent with preferred oscillatory regimes in auditory cortex that compartmentalize stimulus encoding and pro...
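
The key comparison is between a threshold profile that stays flat over an optimal regime and then rises, versus one with no plateau at all. A rough stand-in for that model comparison using BIC (the paper's Bayesian analysis is more elaborate; the data below are synthetic and all names are hypothetical):

```python
import numpy as np

def gaussian_bic(rss, n, k):
    """BIC for a Gaussian-noise model from its residual sum of squares."""
    return n * np.log(rss / n) + k * np.log(n)

def plateau_vs_line(rates, thr):
    """BIC(line) - BIC(plateau+line): positive values favor a flat
    optimal regime followed by rising thresholds."""
    n = len(rates)
    line = np.polyval(np.polyfit(rates, thr, 1), rates)
    rss_line = np.sum((thr - line) ** 2)
    rss_bp = np.inf
    for bp in rates[1:-2]:  # candidate breakpoints
        lo, hi = rates <= bp, rates > bp
        fit_hi = np.polyval(np.polyfit(rates[hi], thr[hi], 1), rates)
        pred = np.where(lo, thr[lo].mean(), fit_hi)
        rss_bp = min(rss_bp, np.sum((thr - pred) ** 2))
    return gaussian_bic(rss_line, n, 2) - gaussian_bic(rss_bp, n, 4)

rates = np.arange(4, 16)  # 4-15 Hz probe rates
thr = np.where(rates <= 9, 0.5, 0.5 + 0.1 * (rates - 9))  # synthetic profile
thr = thr + 0.02 * np.random.default_rng(3).standard_normal(len(rates))
print(plateau_vs_line(rates, thr))  # > 0: the plateau model wins
```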

Discrete Motor Coordinates for Vowel Production

Current models of human vocal production that capture peripheral dynamics in speech require high-dimensional measurements of neural activity, which are mapped onto equally complex motor gestures. In this work we present a motor description of vowels as points in a discrete low-dimensional space. We monitor the dynamics of three points in the oral cavity using Hall-effect transducers and magnets, describing the resulting signals during normal utterances in terms of active/inactive patterns that allow robust vowel classification in an abstract binary space. We use simple matrix algebra to link this representation to the anatomy of the vocal tract and to recent reports of highly tuned neuronal activations for vowel production, suggesting a plausible global strategy for vowel codification and motor production.
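
The binary-space idea can be illustrated with a hypothetical codebook (the paper's actual sensor-to-vowel assignments are not reproduced here): each vowel is an active/inactive pattern over the three measured points, and classification is nearest-codeword matching done with matrix algebra.

```python
import numpy as np

# hypothetical active/inactive codewords over the three oral-cavity points
codebook = {"a": (1, 0, 0), "e": (1, 1, 0), "i": (0, 1, 1),
            "o": (1, 0, 1), "u": (0, 0, 1)}
vowels = list(codebook)
C = np.array([codebook[v] for v in vowels])  # vowels x sensors

def classify(pattern):
    """Nearest codeword by Hamming distance, computed with one matrix
    operation; well-separated codewords make this robust to sensor noise."""
    d = np.abs(C - np.asarray(pattern)).sum(axis=1)
    return vowels[int(np.argmin(d))]

print(classify((1, 1, 0)))  # -> "e"
print(classify((0, 1, 0)))  # one flipped bit still lands on a near neighbor
```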

Speaking rhythmically can shape hearing

Nature Human Behaviour

Evidence suggests that temporal predictions arising from the motor system can enhance auditory perception. However, in speech perception we lack evidence of perception being modulated by production. Here we present a behavioural protocol that captures the existence of such auditory–motor interactions. Participants performed a syllable discrimination task immediately after producing periodic syllable sequences. Two speech rates were explored: a ‘natural’ (individually preferred) rate and a fixed ‘non-natural’ (2 Hz) rate. Using a decoding approach, we show that perceptual performance is modulated by the stimulus phase determined by a participant’s own motor rhythm. Remarkably, for both ‘natural’ and ‘non-natural’ rates, this finding is restricted to a subgroup of the population with quantifiable auditory–motor coupling. The observed pattern is compatible with a neural model assuming a bidirectional interaction of auditory and speech motor cortices. Crucially, the model matches the experimental results only if it incorporates individual differences in the strength of the auditory–motor connection. Assaneo et al. show that speech production timing can facilitate perception; whether individuals utilized motor timing depended on the strength of the auditory–motor cortex connection.
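
The analysis asks whether trial accuracy depends on where in the participant's own production cycle each probe arrives. A simplified version of that dependence (binned rather than decoded; function and variable names are hypothetical):

```python
import numpy as np

def accuracy_by_phase(probe_times, correct, motor_freq, n_bins=6):
    """Mean accuracy as a function of the phase of the participant's
    motor rhythm at each probe. A sinusoidal modulation across bins is
    the signature of motor timing shaping perception."""
    probe_times = np.asarray(probe_times, dtype=float)
    correct = np.asarray(correct, dtype=float)
    phase = (2 * np.pi * motor_freq * probe_times) % (2 * np.pi)
    bins = (phase / (2 * np.pi / n_bins)).astype(int)
    return np.array([correct[bins == b].mean() if np.any(bins == b)
                     else np.nan for b in range(n_bins)])

# e.g. motor_freq = 2.0 for the fixed 'non-natural' rate condition
```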

Neural oscillations are a start toward understanding brain activity rather than the end

PLOS Biology

Does rhythmic neural activity merely echo the rhythmic features of the environment, or does it reflect a fundamental computational mechanism of the brain? This debate has generated a series of clever experimental studies attempting to find an answer. Here, we argue that the field has been obstructed by predictions about oscillators that are based more on intuition than on biophysical models compatible with the observed phenomena. What follows is a series of cautionary examples that serve as reminders to ground our hypotheses in well-developed theories of oscillatory behavior put forth by the theoretical study of dynamical systems. Ultimately, our hope is that this exercise will push the field to concern itself less with the vague question of “oscillation or not” and more with specific biophysical models that can be readily tested.
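
In that spirit, here is one such explicit model: a Stuart-Landau oscillator (the normal form near a Hopf bifurcation) under periodic forcing. Whether it phase-locks, and over what detuning range, follows from the equations rather than from intuition. All parameters below are illustrative.

```python
import numpy as np

def forced_stuart_landau(drive_hz, nat_hz=5.0, mu=1.0, eps=2.0,
                         dt=5e-4, dur=20.0):
    """dz/dt = (mu + i*w0) z - |z|^2 z + eps * e^{i w t}. For mu > 0 the
    oscillation is self-sustained and entrains within an Arnold tongue
    set by eps and the detuning; simple Euler integration."""
    w0, w = 2 * np.pi * nat_hz, 2 * np.pi * drive_hz
    z, traj = 0.1 + 0j, []
    for i in range(int(dur / dt)):
        z += dt * ((mu + 1j * w0) * z - abs(z) ** 2 * z
                   + eps * np.exp(1j * w * i * dt))
        traj.append(z)
    return np.array(traj)

z = forced_stuart_landau(drive_hz=4.8)
t = np.arange(len(z)) * 5e-4
rel_phase = np.unwrap(np.angle(z) - 2 * np.pi * 4.8 * t)
drift = rel_phase[-1] - rel_phase[len(z) // 2]
print(f"phase drift over the second half: {drift:.3f} rad")  # ~0 if entrained
```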

Speech rhythms and their neural foundations

Nature Reviews Neuroscience

Preferred auditory temporal processing regimes and auditory-motor interactions

Decoding the rich temporal dynamics of complex sounds such as speech is constrained by the underlying neuronal processing mechanisms. Oscillatory theories suggest the existence of one optimal perceptual performance regime at auditory stimulation rates in the delta to theta range (<10 Hz), but whether performance is reduced in the alpha range (10–14 Hz) remains controversial. Additionally, the widely discussed contribution of the motor system to timing remains unclear. We measured rate discrimination thresholds between 4 and 15 Hz, and estimated auditory-motor coupling strength through an auditory-motor synchronization task. In a Bayesian model comparison, high auditory-motor synchronizers showed a larger range of constant optimal temporal judgments than low synchronizers, with performance decreasing in the alpha range. This evidence for optimal auditory processing in the theta range is consistent with preferred oscillatory regimes in auditory cortex that compartmentalize stimulus encoding and processing. The f...
