Mikko Sams - Profile on Academia.edu
Papers by Mikko Sams
Hippocampus-centered network is associated with positive symptom alleviation in first-episode psychosis patients
Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, Jun 1, 2023
Social Cognitive and Affective Neuroscience, Apr 17, 2019
We are constantly categorizing other people as belonging to our in-group ('one of us') or out-group ('one of them'). Such grouping occurs fast and automatically and can be based on others' visible characteristics such as skin color or clothing style. Here we studied the neural underpinnings of implicit social grouping based on a characteristic not often visible on the face: male sexual orientation. A total of 14 homosexual and 15 heterosexual males were scanned with functional magnetic resonance imaging while watching a movie about a homosexual man, whose face was also presented subliminally before the movie (when subjects did not yet know the character's sexual orientation) and after it. We found significantly stronger activation to the man's face after seeing the movie in homosexual but not heterosexual subjects in the medial prefrontal cortex, frontal pole, anterior cingulate cortex, right temporoparietal junction and bilateral superior frontal gyrus. In previous research, these brain areas have been connected to social perception, self-referential thinking, empathy, theory of mind and in-group perception. In line with previous studies showing that biased perception of in-/out-group faces is context dependent, our novel approach further demonstrates how complex contextual knowledge gained under naturalistic viewing can bias implicit social perception.
bioRxiv (Cold Spring Harbor Laboratory), Apr 16, 2018
When listening to a narrative, verbal expressions translate into meanings and a flow of mental imagery. However, the same narrative can be heard quite differently depending on listeners' previous experiences and knowledge. We capitalized on such differences to disclose brain regions that support the transformation of a narrative into individualized propositional meanings and associated mental imagery, by analyzing brain activity associated with behaviorally assessed individual meanings elicited by the narrative.
NeuroImage, Apr 1, 2016
Efficient speech perception requires the mapping of highly variable acoustic signals to distinct phonetic categories. How the brain overcomes this many-to-one mapping problem has remained unresolved. To infer the cortical location, latency, and dependency on attention of categorical speech sound representations in the human brain, we measured stimulus-specific adaptation of neuromagnetic responses to sounds from a phonetic continuum. The participants attended to the sounds while performing a non-phonetic listening task and, in a separate recording condition, ignored the sounds while watching a silent film. Neural adaptation indicative of phoneme category selectivity was found only during the attentive condition in the pars opercularis (POp) of the left inferior frontal gyrus, where the degree of selectivity correlated with the ability of the participants to categorize the phonetic stimuli. Importantly, these category-specific representations were activated at an early latency of 115-140 ms, which is compatible with the speed of perceptual phonetic categorization. Further, concurrent functional connectivity was observed between POp and posterior auditory cortical areas. These novel findings suggest that when humans attend to speech, the left POp mediates phonetic categorization through integration of auditory and motor information via the dorsal auditory stream.
Common brain areas activated by hearing and seeing speech
Primary auditory cortex activation by visual speech
Recent studies have yielded contradictory evidence on whether visual speech perception (watching articulatory gestures) can activate the human primary auditory cortex. To circumvent confounds due to inter-individual anatomical variation, we defined our subjects' Heschl's gyri and assessed blood oxygenation-dependent signal changes at 3 T within this confined region during visual speech perception and observation of moving circles. Visual speech perception activated Heschl's gyri in nine subjects, with activation in seven of them extending to the area of primary auditory cortex. Activation was significantly stronger during visual speech perception than during observation of the moving circles. Further, a significant hemisphere by stimulus interaction occurred, suggesting left Heschl's gyrus specialization for visual speech processing.
NeuroImage, 2016
Spatial and non-spatial information about sound events is presumably processed in parallel auditory cortex (AC) "what" and "where" streams, which are modulated by inputs from the respective visual-cortex subsystems. How these parallel processes are integrated into perceptual objects that remain stable across time and the source agent's movements is unknown. We recorded magneto- and electroencephalography (MEG/EEG) data while subjects viewed animated video clips featuring two audiovisual objects, a black and a gray cat. Adaptor-probe events were either linked to the same object (the black cat meowed twice in a row in the same location) or included a visually conveyed identity change (the black and then the gray cat meowed with identical voices in the same location). In addition to effects in visual (including fusiform, middle temporal or MT areas) and frontoparietal association areas, the visually conveyed object-identity change was associated with a release from adaptation of early (50-150 ms) activity in posterior ACs, spreading to left anterior ACs at 250-450 ms in our combined MEG/EEG source estimates. Repetition of events belonging to the same object resulted in increased theta-band (4-8 Hz) synchronization within the "what" and "where" pathways (e.g., between anterior AC and fusiform areas). In contrast, the visually conveyed identity changes resulted in distributed synchronization at higher frequencies (alpha and beta bands, 8-32 Hz) across different auditory, visual, and association areas. The results suggest that sound events become initially linked to perceptual objects in posterior AC, followed by modulations of representations in anterior AC. Hierarchical what and where pathways seem to operate in parallel after repeated audiovisual associations, whereas the resetting of such associations engages a distributed network across auditory, visual, and multisensory areas.
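Band-limited synchronization of the kind reported above is commonly quantified with a phase-locking value (PLV) between band-pass-filtered source signals; the abstract does not spell out the exact connectivity measure used, so the following is a generic sketch rather than the authors' pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band=(4.0, 8.0)):
    """Phase-locking value between signals x and y within a frequency band
    (default: theta, 4-8 Hz). fs is the sampling rate in Hz.
    Illustrative only; not necessarily the measure used in the study."""
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))  # instantaneous phase of x
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    # length of the mean phase-difference vector: 1 = perfect locking, 0 = none
    return np.abs(np.exp(1j * (phase_x - phase_y)).mean())
```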
To listen and to talk: Auditory M100 response shifts posteriorly when perceiving phonemes before speaking
HAL (Le Centre pour la Communication Scientifique Directe), Nov 11, 2010
One of the most fundamental questions in speech perception research is how properties of the acoustic speech signal are mapped to linguistic elements such as phonemes. Distinct theories have been proposed to answer this question. A crucial distinction among these theories can be put in the form of a simple question: does the speech motor system have a role in speech perception? From this view, recent studies postulate that the anterior auditory cortex "what" processing pathway is involved in acoustic-phonetic decoding, while the posterior auditory cortex "where/how" stream underlies a sensorimotor mapping between auditory representations and articulatory motor representations. In return, this motor-related activity is hypothesized to constrain phonetic interpretation of the sensory inputs through the internal generation of candidate articulatory categorizations. Consistent with such perceptual-motor interactions in speech perception, we hypothesized that the "where/how" processing pathway would be more engaged when perceiving speech stimuli before producing them than when passively listening to the same stimuli. Using magnetoencephalography, we tested whether the equivalent current dipole (ECD) source location estimate (which approximates the center of gravity of neural activity) of the so-called M100 response, recorded about 100 ms after auditory speech stimulus onset, shifts posteriorly when subjects perceive phonemes and subsequently perform a speech production task, compared to a purely passive perception task. Ten healthy volunteers were presented with the same syllables at two levels of ambiguity (presented with or without auditory noise) in four different conditions: passive perception, passive perception and overt repetition, passive perception and covert repetition, and passive perception and overt imitation. In the last three 'motor' conditions, the task of the subjects was to perceive the phoneme first, then wait for a visual signal, and perform the speech production task. Compared to the passive speech perception condition, results showed a significant shift of the ECD-estimated location of the M100 response to the phoneme sounds to a more posterior position in the left hemisphere during the motor tasks. This demonstrates that perceiving speech before speaking induces stronger involvement of the "where/how" processing pathway and therefore suggests that sensorimotor interactions during speech perception depend on the exact content of the task.
Primary Auditory Cortex Activation by Lip-reading: an fMRI Study at 3 Tesla
Author response for "Lipreading a naturalistic narrative in a female population: Neural characteristics shared with listening and reading"
Only a few of us are skilled lipreaders while most struggle at the task. To illuminate the poorly understood neural substrate of this variability, we estimated the similarity of brain activity during lipreading, listening, and reading of the same 8-min narrative in subjects whose lipreading skill varied extensively. The similarity of brain activity was estimated by voxel-wise comparison of the BOLD signal time courses. Inter-subject correlation of the time courses revealed that lipreading and listening are supported by the same brain areas in temporal, parietal and frontal cortices, precuneus and cerebellum. However, lipreading activated only a small part of the neural network that is active during listening to or reading the narrative, demonstrating that neural processing during lipreading vs. listening/reading differs substantially. Importantly, skilled lipreading was specifically associated with bilateral activity in the superior and middle temporal cortex, which also encode auditory speech. Our results both confirm findings from the few previous studies that used isolated speech segments as stimuli and extend, in an important way, our understanding of the neural mechanisms of lipreading.
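The inter-subject correlation measure used above is conceptually simple: each subject's voxel-wise BOLD time course is correlated with the average time course of the remaining subjects. A minimal sketch, assuming preprocessed data in a (subjects x timepoints x voxels) NumPy array; the array layout and leave-one-out averaging scheme are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def voxelwise_isc(bold):
    """Leave-one-out inter-subject correlation.

    bold: array of shape (n_subjects, n_timepoints, n_voxels),
    assumed already preprocessed (motion-corrected, normalized, etc.).
    """
    n_subj = bold.shape[0]
    # z-score each subject's time course separately for every voxel
    z = (bold - bold.mean(axis=1, keepdims=True)) / bold.std(axis=1, keepdims=True)
    corrs = np.empty((n_subj, bold.shape[2]))
    for s in range(n_subj):
        others = z[np.arange(n_subj) != s].mean(axis=0)  # leave-one-out average
        # Pearson r of z-scored series reduces to a mean of products,
        # rescaled by the (non-unit) std of the averaged series
        corrs[s] = (z[s] * others).mean(axis=0) / others.std(axis=0)
    return corrs.mean(axis=0)  # mean ISC per voxel
```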
Lipreading a naturalistic narrative in a female population: Neural characteristics shared with listening and reading
Brain and Behavior, Dec 29, 2022
NeuroImage, Aug 1, 2017
During a conversation or when listening to music, auditory and visual information are combined automatically into audiovisual objects. However, it is still poorly understood how specific types of visual information shape the neural processing of sounds in lifelike stimulus environments. Here we applied multi-voxel pattern analysis to investigate how naturally matching visual input modulates supratemporal cortex activity during the processing of naturalistic acoustic speech, singing and instrumental music. Bayesian logistic regression classifiers with sparsity-promoting priors were trained to predict whether the stimulus was audiovisual or auditory, and whether it contained piano playing, speech, or singing. The predictive performance of the classifiers was tested by leaving out one participant at a time for testing and training the model on the remaining 15 participants. The signature patterns associated with unimodal auditory stimuli encompassed distributed locations mostly in the middle and superior temporal gyri (STG/MTG). A pattern regression analysis, based on a continuous acoustic model, revealed that activity in some of these MTG and STG areas was associated with acoustic features present in the speech and music stimuli. A concurrent visual stimulus modulated activity in bilateral MTG (speech), the lateral aspect of the right anterior STG (singing), and bilateral parietal opercular cortex (piano). Our results suggest that specific supratemporal brain areas are involved in processing complex natural speech, singing, and piano playing, and that other brain areas located in anterior (facial speech) and posterior (music-related hand actions) supratemporal cortex are influenced by related visual information. These anterior and posterior supratemporal areas have been linked to stimulus identification and sensory-motor integration, respectively.
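The leave-one-participant-out scheme described above (train on 15 participants, test on the one held out) can be sketched as follows. scikit-learn's L1-penalized logistic regression stands in here for the paper's Bayesian classifier with sparsity-promoting priors, and all array names are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def lopo_accuracy(X, y, subject_ids):
    """X: (n_samples, n_voxels) activity patterns; y: stimulus labels;
    subject_ids: per-sample participant identifier."""
    scores = []
    for s in np.unique(subject_ids):
        train, test = subject_ids != s, subject_ids == s
        # L1 penalty promotes sparse voxel weights, loosely analogous
        # to the sparsity-promoting priors of the Bayesian model
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
        clf.fit(X[train], y[train])
        scores.append(clf.score(X[test], y[test]))
    return float(np.mean(scores))
```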
Brain Sciences
Perception of the same narrative can vary between individuals depending on a listener’s previous experiences. We studied whether and how cultural family background may shape the processing of an audiobook in the human brain. During functional magnetic resonance imaging (fMRI), 48 healthy volunteers from two different cultural family backgrounds listened to an audiobook depicting the intercultural social life of young adults with the respective cultural backgrounds. Shared cultural family background increased inter-subject correlation of hemodynamic activity in the left-hemispheric Heschl’s gyrus, insula, superior temporal gyrus, lingual gyrus and middle temporal gyrus, in the right-hemispheric lateral occipital and posterior cingulate cortices as well as in the bilateral middle temporal gyrus, middle occipital gyrus and precuneus. Thus, cultural family background is reflected in multiple areas of speech processing in the brain and may also modulate visual imagery. After neuroimaging...
Design Science, 2020
Empathic design highlights the relevance of understanding users and their circumstances in order to obtain good design outcomes. However, theory-based quantitative methods, which can be used to test user understanding, are hard to find in the design science literature. Here, we introduce a validated method used in social psychological research – the empathic accuracy method – into design to explore how well two designers perform in a design task and whether the designers’ empathic accuracy performance and the physiological synchrony between the two designers and a group of users can predict the designers’ success in two design tasks. The designers could correctly identify approximately 50% of the users’ reported mental content. We did not find a significant correlation between the designers’ empathic accuracy and their (1) performance in design tasks and (2) physiological synchrony with users. Nevertheless, the empathic accuracy method is promising in its attempts to quantify the ef...
Social Cognitive and Affective Neuroscience, 2020
Putting oneself into the shoes of others is an important aspect of social cognition. We measured brain hemodynamic activity and eye-gaze patterns while participants were viewing a shortened version of the movie ‘My Sister’s Keeper’ from two perspectives: that of a potential organ donor, who violates moral norms by refusing to donate her kidney, and that of a potential organ recipient, who suffers in pain. Inter-subject correlation (ISC) of brain activity was significantly higher during the potential organ donor’s perspective in dorsolateral and inferior prefrontal, lateral and inferior occipital, and inferior–anterior temporal areas. In the reverse contrast, stronger ISC was observed in superior temporal, posterior frontal and anterior parietal areas. Eye-gaze analysis showed a higher proportion of fixations on the potential organ recipient during both perspectives. Taken together, these results suggest that during social perspective-taking different brain areas can be flexibly recrui...
Social Cognitive and Affective Neuroscience, 2018
People socialized in different cultures differ in their thinking styles. Eastern-culture people view objects more holistically by taking context into account, whereas Western-culture people view objects more analytically by focusing on them at the expense of context. Here we studied whether participants who have different thinking styles but live within the same culture exhibit differential brain activity when viewing a drama movie. A total of 26 Finnish participants, divided into holistic and analytical thinkers based on self-report questionnaire scores, watched a shortened drama movie during functional magnetic resonance imaging. We compared the intersubject correlation (ISC) of brain hemodynamic activity of holistic vs analytical participants across the movie viewings. Holistic thinkers showed significant ISC in more extensive cortical areas than analytical thinkers, suggesting that they perceived the movie in a more similar fashion. Significantly higher ISC was observed in holistic thinkers in occipital, prefrontal and temporal cortices. In analytical thinkers, significant ISC was observed in the right-hemisphere fusiform gyrus, temporoparietal junction and frontal cortex. Since these results were obtained in participants with a similar cultural background, they are less prone to confounds from other possible cultural differences. Overall, our results show how brain activity in holistic vs analytical participants differs when viewing the same drama movie.
PLoS ONE, 2017
Seeing an action may activate the corresponding action motor code in the observer. It remains unresolved whether seeing and performing an action activates similar action-specific motor codes in the observer and the actor. We used a novel hyperclassification approach to reveal shared brain activation signatures of action execution and observation in interacting human subjects. In the first experiment, two "actors" performed four types of hand actions while their haemodynamic brain activations were measured with 3-T functional magnetic resonance imaging (fMRI). The actions were videotaped and shown to 15 "observers" during a second fMRI experiment. Eleven observers saw the videos of one actor, and the remaining four observers saw the videos of the other actor. In a control fMRI experiment, one of the actors performed actions with closed eyes, and five new observers viewed these actions. Bayesian canonical correlation analysis was applied to functionally realign obser...
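The hyperclassification idea, training a classifier on an actor's brain activity and testing it on observers' functionally realigned activity, can be sketched with canonical correlation analysis. scikit-learn's non-Bayesian CCA stands in below for the Bayesian CCA named in the abstract, and the data split and component count are illustrative assumptions:

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import LinearSVC

def hyperclassify(actor_X, observer_X, y, train_mask, n_components=10):
    """actor_X, observer_X: (n_samples, n_voxels) response matrices aligned
    on a shared trial axis; y: action labels; train_mask: boolean split."""
    # learn a shared latent space from paired training trials
    cca = CCA(n_components=n_components)  # must not exceed data dimensions
    cca.fit(actor_X[train_mask], observer_X[train_mask])
    actor_c, observer_c = cca.transform(actor_X, observer_X)
    # train on the actor's canonical scores, test on the observer's
    clf = LinearSVC().fit(actor_c[train_mask], y[train_mask])
    return clf.score(observer_c[~train_mask], y[~train_mask])
```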
Brain and Behavior, 2017
We examined which brain areas are involved in the comprehension of acoustically distorted speech using an experimental paradigm where the same distorted sentence can be perceived at different levels of intelligibility. This change in intelligibility occurs via a single intervening presentation of the intact version of the sentence, and the effect lasts at least on the order of minutes. Since the acoustic structure of the distorted stimulus is kept fixed and only intelligibility is varied, this allows one to study brain activity related specifically to speech comprehension. In a functional magnetic resonance imaging (fMRI) experiment, each stimulus set contained a block of six distorted sentences, followed by the intact counterparts of the sentences, after which the sentences were presented in distorted form again. A total of 18 such sets were presented to 20 human subjects. The blood oxygenation level dependent (BOLD) responses elicited by the distorted sentences that came after the disambiguating, intact sentences were contrasted with the responses to the sentences presented before disambiguation. This revealed increased activity in the bilateral frontal pole, the dorsal anterior cingulate/paracingulate cortex, and the right frontal operculum. Decreased BOLD responses were observed in the posterior insula, Heschl's gyrus, and the posterior superior temporal sulcus. The brain areas that showed BOLD enhancement for increased sentence comprehension have been associated with executive functions and with the mapping of incoming sensory information to representations stored in episodic memory. Thus, the comprehension of acoustically distorted speech may be associated with the engagement of memory-related subsystems. Further, activity in the primary auditory cortex was modulated by prior experience, possibly in a predictive coding framework. Our results suggest that memory biases the perception of ambiguous sensory information toward interpretations that have the highest probability of being correct based on previous experience.