Moritz Wurm | University of Trento

Papers by Moritz Wurm

Decoding action concepts at different levels of abstraction - an fMRI MVPA study

Background / Purpose: Action concepts are abstractions of concrete actions. It is assumed that abstraction occurs on several distinct levels. For example, “opening a bottle” is an abstraction of concrete instantiations of that action (e.g., “opening a particular wine bottle”). On a higher level of abstraction, “opening” describes the action concept independent of the object class. The neural substrates of abstraction from concrete actions to abstract action concepts are debated: Motor versions of embodied theories claim that action concepts are grounded in the motor system. By contrast, classical cognitive theories propose that action concepts consist of amodal representations in networks distinct from the motor system. Here we used cross-conditional multivoxel pattern analysis (MVPA) to decode observed actions on three levels of abstraction: a concrete level (opening vs. closing a specific bottle), an intermediate level (opening vs. closing across different bottle exemplars), and an abstract level (opening vs. closing independent of the object class).

Predicting goals in action episodes attenuates BOLD response in inferior frontal and occipitotemporal cortex

Objects Mediate Goal Integration in Ventrolateral Prefrontal Cortex during Action Observation

Actions performed by others are mostly not observed in isolation, but embedded in sequences of actions tied together by an overarching goal. Therefore, preceding actions can modulate the observer's expectations about the currently perceived action. Ventrolateral prefrontal cortex (vlPFC), and inferior frontal gyrus (IFG) in particular, is suggested to subserve the integration of episodic as well as semantic information and memory, including action scripts. The present fMRI study investigated whether activation in IFG varies with the effort to integrate expected and observed action, even when not required by the task. During an fMRI session, participants were instructed to attend to short videos of single actions and to deliver a judgment about the actor's current goal. We manipulated the strength of goal expectation induced by the preceding action, implementing the parameter "goal-relatedness" between the preceding and the currently observed action. Moreover, since objects point to the probability of certain actions, we also manipulated whether the current and the preceding action shared at least one object or not. We found an interaction between the two factors goal-relatedness and shared object: IFG activation increased the weaker the goal-relatedness between the preceding and the current action was, but only when they shared at least one object. Here, integration of successive action steps was triggered by the reappearing (shared) object but hampered by a weak goal-relatedness with the actually observed manipulation. These findings support the recently emerging view that IFG activation is enhanced by goal-related conflicts during action observation.

Decoding Internally and Externally Driven Movement Plans

During movement planning, brain activity within parietofrontal networks encodes information about upcoming actions that can be driven either externally (e.g., by a sensory cue) or internally (i.e., by a choice/decision). Here we used multivariate pattern analysis (MVPA) of fMRI data to distinguish between areas that represent (1) abstract movement plans that generalize across the way in which these were driven, (2) internally driven movement plans, or (3) externally driven movement plans. In a delayed-movement paradigm, human volunteers were asked to plan and execute three types of nonvisually guided right-handed reaching movements toward a central target object: using a precision grip, a power grip, or touching the object without hand preshaping. On separate blocks of trials, movements were either instructed via color cues (Instructed condition) or chosen by the participant (Free-Choice condition). Using ROI-based and whole-brain searchlight-based MVPA, we found abstract representations of planned movements that generalize across the way these movements are selected (internally vs. externally driven) in parietal cortex, dorsal premotor cortex, and primary motor cortex contralateral to the acting hand. In addition, we revealed representations specific for internally driven movement plans in contralateral ventral premotor cortex, dorsolateral prefrontal cortex, supramarginal gyrus, and in ipsilateral posterior parietotemporal regions, suggesting that these regions are recruited during movement selection. Finally, we observed representations of externally driven movement plans in bilateral supplementary motor cortex and a similar trend in presupplementary motor cortex, suggesting a role in stimulus–response mapping.
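The searchlight MVPA mentioned above slides a small sphere across the brain and asks, at every location, whether the local activity pattern discriminates the planned movements. Below is a minimal sketch of that logic, assuming single-trial pattern estimates in a 4-D array betas (trials x x x y x z), per-trial grip labels y, and a boolean brain mask; the variable names, the sphere radius, and the 5-fold cross-validation are illustrative and not the study's actual pipeline.

```python
# Minimal searchlight sketch (illustrative; not the published analysis pipeline).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def searchlight_accuracy(betas, y, mask, radius=2):
    """Cross-validated decoding accuracy for a sphere centred on each in-mask voxel."""
    acc_map = np.zeros(betas.shape[1:])
    coords = np.argwhere(mask)                       # all voxel coordinates inside the brain mask
    for cx, cy, cz in coords:
        # gather in-mask voxels within `radius` of the current centre voxel
        sphere = [(x, yy, z) for x, yy, z in coords
                  if (x - cx) ** 2 + (yy - cy) ** 2 + (z - cz) ** 2 <= radius ** 2]
        X = np.stack([betas[:, x, yy, z] for x, yy, z in sphere], axis=1)
        # in practice the folds would respect fMRI runs; plain 5-fold CV is a stand-in
        acc_map[cx, cy, cz] = cross_val_score(LinearSVC(), X, y, cv=5).mean()
    return acc_map                                   # accuracy map to be tested against chance
```

To test whether a region's patterns generalize across how movements were selected, the cross-validation step can be replaced by training the classifier on trials of one condition (e.g., Instructed) and testing it on trials of the other.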

What's she doing in the kitchen? Context helps when actions are hard to recognize

Specific spatial environments are often indicative of where certain actions may take place: In kitchens we prepare food, and in bathrooms we engage in personal hygiene, but not vice versa. In action recognition, contextual cues may constrain an observer's expectations toward actions that are more strongly associated with a particular context than others. Such cues should become particularly helpful when the action itself is difficult to recognize. However, to date only easily identifiable actions have been investigated, and the effects of context on recognition were interfering rather than facilitatory. To test whether context also facilitates action recognition, we measured recognition performance for hardly identifiable actions that took place in compatible, incompatible, and neutral contextual settings. Action information was degraded by pixelizing the area of the object manipulation while the room in which the action took place remained fully visible. We found significantly higher accuracy for actions that took place in compatible compared to incompatible and neutral settings, indicating facilitation. Additionally, action recognition was slower in incompatible settings than in compatible and neutral settings, indicating interference. Together, our findings demonstrate that contextual information is effectively exploited during action observation, in particular when visual information about the action itself is sparse. Differential effects on speed and accuracy suggest that contexts modulate action recognition at different levels of processing. Our findings emphasize the importance of contextual information in comprehensive, ecologically valid models of action recognition.

Decoding Actions at Different Levels of Abstraction

Brain regions that mediate action understanding must contain representations that are action specific and at the same time tolerate a wide range of perceptual variance. Whereas progress has been made in understanding such generalization mechanisms in the object domain, the neural mechanisms to conceptualize actions remain unknown. In particular, there is ongoing dissent between motor-centric and cognitive accounts as to whether premotor cortex or brain regions in closer relation to perceptual systems, i.e., lateral occipitotemporal cortex, contain neural populations with such mapping properties. To date, it is unclear to what degree action-specific representations in these brain regions generalize from concrete action instantiations to abstract action concepts. However, such information would be crucial to differentiate between motor and cognitive theories. Using ROI-based and searchlight-based fMRI multivoxel pattern decoding, we sought brain regions in human cortex that manage the balancing act between specificity and generality. We investigated a concrete level that distinguishes actions based on perceptual features (e.g., opening vs closing a specific bottle), an intermediate level that generalizes across movement kinematics and specific objects involved in the action (e.g., opening different bottles with cork or screw cap), and an abstract level that additionally generalizes across object category (e.g., opening bottles or boxes). We demonstrate that the inferior parietal and occipitotemporal cortex code actions at abstract levels whereas the premotor cortex codes actions at the concrete level only. Hence, occipitotemporal, but not premotor, regions fulfill the necessary criteria for action understanding. This result is compatible with cognitive theories but strongly undermines motor theories of action understanding.

Decoding Concrete and Abstract Action Representations During Explicit and Implicit Conceptual Processing

Action understanding requires a many-to-one mapping of perceived input onto abstract representations that generalize across concrete features. It is debated whether such abstract action concepts are encoded in ventral premotor cortex (PMv; motor hypothesis) or, alternatively, are represented in lateral occipitotemporal cortex (LOTC; cognitive hypothesis). We used fMRI-based multivoxel pattern analysis to decode observed actions at concrete and abstract, object-independent levels of representation. Participants observed videos of 2 actions involving 2 different objects, using either an explicit or implicit task with respect to conceptual action processing. We decoded concrete action representations by training and testing a classifier to discriminate between actions within each object category. To identify abstract action representations, we trained the classifier to discriminate actions performed on one object and tested the classifier on actions performed on the other object, and vice versa. Region-of-interest and searchlight analyses revealed decoding in LOTC at both concrete and abstract levels during both tasks, whereas decoding in PMv was restricted to the concrete level during the explicit task. In right inferior parietal cortex, decoding was significant for the abstract level during the explicit task. Our findings are incompatible with the motor hypothesis, but support the cognitive hypothesis of action understanding.
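The cross-classification scheme described above (train on actions performed with one object, test on actions performed with the other, and average the two directions) can be sketched in a few lines. The sketch below is illustrative only: the array and label names (X, action, obj) and the linear SVM are assumptions, not the study's exact implementation.

```python
# Illustrative cross-object decoding sketch (not the published implementation).
import numpy as np
from sklearn.svm import LinearSVC

def cross_object_accuracy(X, action, obj):
    """Train on trials of one object, test on trials of the other, in both directions.

    X: (n_trials, n_voxels) ROI patterns; action, obj: per-trial label arrays.
    """
    a, b = np.unique(obj)                       # assumes exactly two object categories
    accs = []
    for train_obj, test_obj in [(a, b), (b, a)]:
        clf = LinearSVC().fit(X[obj == train_obj], action[obj == train_obj])
        accs.append(clf.score(X[obj == test_obj], action[obj == test_obj]))
    # accuracy above chance (0.5 for two actions) indicates object-general action coding
    return float(np.mean(accs))
```

Decoding at the concrete level corresponds to the standard within-object case, i.e., cross-validating the classifier on trials involving the same object.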

Objects tell us what action we can expect: dissociating brain areas for retrieval and exploitation of action knowledge during action observation in fMRI

Objects are reminiscent of actions often performed with them: a knife and an apple remind us of peeling or cutting the apple. Mnemonic representations of object-related actions (action codes) evoked by the sight of an object may constrain and hence facilitate recognition of unfolding actions. The present fMRI study investigated if and how action codes influence brain activation during action observation. The average number of action codes (NAC) of 51 sets of objects was rated by a group of n = 24 participants. In an fMRI study, different volunteers were asked to recognize actions performed with the same objects presented in short videos. To disentangle areas reflecting the storage of action codes from those exploiting them, we showed object-compatible and object-incompatible (pantomime) actions. Areas storing action codes were expected to positively co-vary with NAC in both object-compatible and object-incompatible actions; due to its role in tool-related tasks, we hypothesized left anterior inferior parietal cortex (aIPL) here. In contrast, areas exploiting action codes were expected to show this correlation only in object-compatible but not incompatible actions, as only object-compatible actions match one of the active action codes. For this interaction, we hypothesized ventrolateral premotor cortex (PMv) to join aIPL due to its role in biasing competition in IPL. We found left anterior intraparietal sulcus (IPS) and left posterior middle temporal gyrus (pMTG) to co-vary with NAC. In addition to these areas, action codes increased activity for object-compatible actions in bilateral PMv, right IPS, and lateral occipital cortex (LO). Findings suggest that during action observation, the brain derives possible actions from perceived objects and uses this information to shape action recognition. In particular, the number of expectable actions modulates activity levels in PMv, IPL, and pMTG, but only PMv reflects their biased competition while the observed action unfolds.

Expertise in action observation: recent neuroimaging findings and future perspectives

Action Observers Implicitly Expect Actors to Act Goal-Coherently, Even if they Do Not: An fMRI Study

Actions observed in everyday life normally consist of one person performing sequences of goal-directed actions. The present fMRI study tested the hypotheses that observers are influenced by the actor's identity, even when this information is task-irrelevant, and that this information shapes their expectations about subsequent actions of the same actor. Participants watched short video clips of action steps that either pertained to a common action with an overarching goal or not, and were performed either by one or by varying actors (2 × 2 design). Independent of goal coherence, actor coherence elicited activation in dorsolateral and ventromedial frontal cortex, together pointing to a spontaneous attempt to integrate all actions performed by one actor. Interestingly, watching an actor performing unrelated actions elicited additional activation in left inferior frontal gyrus, suggesting a search in semantic memory in an attempt to construct an overarching goal that can reconcile the disparate action steps with a coherent intention. Post-experimental surveys indicate that these processes occur mostly unconsciously. Findings strongly suggest a spontaneous expectation bias toward actor-related episodes in action observers, and hence a strong impact of actor information on action observation.

Surprised at All the Entropy: Hippocampal, Caudate and Midbrain Contributions to Learning from Prediction Errors

PLoS ONE, 2012

Influential concepts in neuroscientific research cast the brain as a predictive machine that revises its predictions when they are violated by sensory input. This relates to the predictive coding account of perception, but also to learning. Learning from prediction errors has been suggested to take place in the hippocampal memory system as well as in the basal ganglia. The present fMRI study used an action-observation paradigm to investigate the contributions of the hippocampus, caudate nucleus, and midbrain dopaminergic system to different types of learning: learning in the absence of prediction errors, learning from prediction errors, and responding to the accumulation of prediction errors in unpredictable stimulus configurations. We analyzed the BOLD responses of these regions of interest to the different types of learning, implementing a bootstrapping procedure to correct for false positives. We found both the caudate nucleus and the hippocampus to be activated by perceptual prediction errors. The hippocampal responses seemed to relate to the associative mismatch between a stored representation and current sensory input. Moreover, its response was significantly influenced by the average information, or Shannon entropy, of the stimulus material. In accordance with earlier results, the habenula was activated by perceptual prediction errors. Lastly, we found that the substantia nigra was activated by the novelty of sensory input. In sum, we established that the midbrain dopaminergic system, the hippocampus, and the caudate nucleus were to different degrees significantly involved in the three different types of learning: acquisition of new information, learning from prediction errors, and responding to unpredictable stimulus developments. We relate learning from perceptual prediction errors to the concept of predictive coding and related information-theoretic accounts.
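The "average information, or Shannon entropy" of the stimulus material quantifies how unpredictable the upcoming stimuli are on average. As a reminder of the measure, a short sketch follows; the probability values are purely illustrative and do not come from the study.

```python
# Shannon entropy in bits: H = -sum_i p_i * log2(p_i).
import numpy as np

def shannon_entropy(p):
    """Average information of a discrete distribution; zero-probability outcomes contribute 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

print(shannon_entropy([1.0]))          # fully predictable continuation: 0 bits
print(shannon_entropy([0.5, 0.5]))     # two equally likely continuations: 1 bit
print(shannon_entropy([0.25] * 4))     # four equally likely continuations: 2 bits
```

Higher entropy thus corresponds to stimulus material that is, on average, more surprising.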

The Context–Object–Manipulation Triad: Cross Talk during Action Perception Revealed by fMRI

Journal of Cognitive Neuroscience, 2012

To recognize an action, an observer exploits information about the applied manipulation, the involved objects, and the context where the action occurs. Context, object, and manipulation information are hence expected to be tightly coupled in a triadic relationship (the COM triad hereafter). The current fMRI study investigated the hemodynamic signatures of reciprocal modulation in the COM triad. Participants watched short video clips of pantomime actions, that is, actions performed with inappropriate objects, taking place in compatible or incompatible contexts. The usage of pantomime actions enabled the disentanglement of the neural substrates of context-manipulation (CM) and context-object (CO) associations. There were trials in which (1) both manipulation and objects, (2) only manipulation, (3) only objects, or (4) neither manipulation nor objects were compatible with the context. CM compatibility effects were found in an action-related network comprising ventral premotor cortex, SMA, left anterior intraparietal sulcus, and bilateral occipitotemporal cortex. Conversely, CO compatibility effects were found bilaterally in the lateral occipital complex. These effects interacted in subregions of the lateral occipital complex. An overlap of CM and CO effects was observed in the occipitotemporal cortex and the dorsal attention network, that is, superior frontal sulcus/dorsal premotor cortex and superior parietal lobe. Results indicate that contextual information is integrated into the analysis of actions. Manipulation and object information is linked by contextual associations as a function of co-occurrence in specific contexts. Activation of either CM or CO associations shifts attention to either action- or object-related information.

Do we mind other minds when we mind other minds' actions? A functional magnetic resonance imaging study

Human Brain Mapping

Action observation engages higher motor areas, possibly reflecting an internal simulation. However, actions considered odd or unusual were found to trigger additional activity in the so-called theory of mind (ToM) network, pointing to deliberations on the actor's mental states. In this functional magnetic resonance imaging study, we tested the hypothesis that an allocentric perspective on a normal action, and even more so the sight of the actor's face, suffices to evoke ToM activity. Subjects observed short videos of object manipulation filmed from either the egocentric or the allocentric perspective, the latter including the actor's face in half of the trials. On the basis of a regions-of-interest analysis using ToM coordinates, we found increased neural activity in several regions of the ToM network. First, perceiving actions from an allocentric compared with the egocentric perspective enhanced activity in the left temporoparietal junction (TPJ). Second, the presence of the actor's face enhanced activation in the TPJ bilaterally, the medial prefrontal cortex (mPFC), and posterior cingulate cortex (PCC). Finally, the mPFC and PCC showed increased responses when the actor changed with respect to the preceding trial. These findings were further corroborated by z-map findings for the latter two contrasts. Together, findings indicate that observation of normal everyday actions can engage ToM areas and that an allocentric perspective, seeing the actor's face, and seeing a face switch are effective triggers.

Squeezing lemons in the bathroom: contextual information modulates action recognition

NeuroImage, 2011

Most everyday actions take place in domestic rooms that are specific for certain classes of actions. Contextual information derived from domestic settings may therefore influence the efficiency of action recognition. The present studies investigated whether action recognition is modulated by the compatibility of the context an action is embedded in. To this end, subjects watched video clips of actions performed in compatible, incompatible, and neutral contexts. Recognition was significantly slower when actions took place in an incompatible as compared to a compatible or a neutral context (Experiment 1). Functional MRI revealed increased activation for incompatible contexts in Brodmann Areas (BA) 44, 45, and 47 of the left ventrolateral prefrontal cortex (vlPFC; Experiment 2). Results suggest that contextual information, even when task-irrelevant, informs a high processing level of action analysis. In particular, the functional profiles assigned to these prefrontal regions suggest that contextual information activates associated action representations as a function of (in)compatibility. Thus, incompatibility effects may reflect the attempt to resolve the conflict between action and context by embedding the presented action step into an overarching action that is again compatible with the provided context.
