Oiwi Parker Jones | University of Oxford
Papers by Oiwi Parker Jones
Neuropsychologia, 2015
We used fMRI in 35 healthy participants to investigate how two neighbouring subregions in the lateral anterior temporal lobe (LATL) contribute to semantic matching and object naming. Four different levels of processing were considered: (A) recognition of the object concepts; (B) search for semantic associations related to object stimuli; (C) retrieval of semantic concepts of interest; and (D) retrieval of stimulus-specific concepts as required for naming. During semantic association matching on picture stimuli or heard object names, we found that activation in both subregions was higher when the objects were semantically related (mug-kettle) than unrelated (car-teapot). This is consistent with both LATL subregions playing a role in (C), the successful retrieval of amodal semantic concepts. In addition, one subregion was more activated for object naming than matching semantically related objects, consistent with (D), the retrieval of a specific concept for naming. We discuss the implications of these novel findings for cognitive models of semantic processing and left anterior temporal lobe function.
Loanwords in the World's Languages: A Comparative Handbook, 2009
Frontiers in Human Neuroscience, 2014
This fMRI study used a single, multi-factorial, within-subjects design to dissociate multiple linguistic and non-linguistic processing areas that are all involved in repeating back heard words. The study compared: (1) auditory to visual inputs; (2) phonological to non-phonological inputs; (3) semantic to non-semantic inputs; and (4) speech production to finger-press responses. The stimuli included words (semantic and phonological inputs), pseudowords (phonological input), pictures and sounds of animals or objects (semantic input), and colored patterns and hums (non-semantic and non-phonological). The speech production tasks involved auditory repetition, reading, and naming, while the finger-press tasks involved one-back matching. The results from the main effects and interactions were compared to predictions from a previously reported functional anatomical model of language based on a meta-analysis of many different neuroimaging experiments. Although many findings from the current experiment replicated those predicted, our within-subject design also revealed novel results by providing sufficient anatomical precision to dissect several different regions within the anterior insula, pars orbitalis, anterior cingulate, SMA, and cerebellum. For example, we found that one part of the pars orbitalis was involved in phonological processing and another in semantic processing. We also dissociated four different types of phonological effects in the left superior temporal sulcus (STS), left putamen, left ventral premotor cortex, and left pars orbitalis. Our findings challenge some commonly held opinions on the functional anatomy of language, resolve some previously conflicting findings about specific brain regions, and reveal details of the word repetition process that are not well captured by current models.
Journal of Neuroscience, 2015
1. Individual subjects show a preference for using either somatosensory feedback or auditory feedback during picture naming
2. Somatosensory and auditory feedback are more engaged by reading aloud than by picture naming, even when the motor outputs are controlled
NeuroImage, 2019
This fMRI study of 24 healthy human participants investigated whether any part of the auditory cortex was more responsive to self-generated speech sounds compared to hearing another person speak. The results demonstrate a double dissociation in two different parts of the auditory cortex. In the right posterior superior temporal sulcus (RpSTS), activation was higher during speech production than listening to auditory stimuli, whereas in bilateral superior temporal gyri (STG), activation was higher for listening to auditory stimuli than during speech production. In the second part of the study, we investigated the function of the identified regions by examining how activation changed across a range of listening and speech production tasks that systematically varied the demands on acoustic, semantic, phonological and orthographic processing. In RpSTS, activation during auditory conditions was higher in the absence of semantic cues, plausibly indicating increased attention to the spectral-temporal features of auditory inputs. In addition, RpSTS responded in the absence of any auditory inputs when participants were making one-back matching decisions on visually presented pseudowords. After analysing the influence of visual, phonological, semantic and orthographic processing, we propose that RpSTS (i) contributes to short-term memory of speech sounds, (ii) supports spectral-temporal processing of auditory input, and (iii) may play a role in integrating auditory expectations with auditory input. In contrast, activation in bilateral STG was sensitive to acoustic input and did not respond in the absence of auditory input. The special role of RpSTS during speech production therefore merits further investigation if we are to fully understand the neural mechanisms supporting speech production during speech acquisition, adult life, hearing loss and after brain injury.
Brain, 2018
Acquired language disorders after stroke are strongly associated with left hemisphere damage. When language difficulties are observed in the context of right hemisphere strokes, patients are usually considered to have atypical functional anatomy. By systematically integrating behavioural and lesion data from brain-damaged patients with functional MRI data from neurologically normal participants, we investigated when and why right hemisphere strokes cause language disorders. Experiment 1 studied right-handed patients with unilateral strokes that damaged the right (n = 109) or left (n = 369) hemisphere. The most frequently impaired language task was auditory sentence-to-picture matching after right hemisphere strokes, and spoken picture description after left hemisphere strokes. Of those with auditory sentence-to-picture matching impairments after right hemisphere strokes, the majority (n = 9) had normal performance on tests of perceptual (visual or auditory) and linguistic (semantic, phonological or syntactic) processing. Experiment 2 found that these nine patients had significantly more damage to dorsal parts of the superior longitudinal fasciculus and the right inferior frontal sulcus compared to 75 other patients who also had right hemisphere strokes but were not impaired on the auditory sentence-to-picture matching task. Damage to these right hemisphere regions caused long-term speech comprehension difficulties in 67% of patients. Experiments 3 and 4 used functional MRI in two groups of 25 neurologically normal individuals to show that, within the regions identified by Experiment 2, the right inferior frontal sulcus was normally activated by (i) auditory sentence-to-picture matching; and (ii) one-back matching when the demands on linguistic and non-linguistic working memory were high. Together, these experiments demonstrate that the right inferior frontal cortex contributes to the linguistic and non-linguistic working memory capacity (executive function) that is needed for normal speech comprehension. Our results link previously unrelated literatures on the role of the right inferior frontal cortex in executive processing and the role of executive processing in sentence comprehension, which in turn helps to explain why right inferior frontal activity has previously been reported to increase during recovery of language function after left hemisphere stroke. The clinical relevance of our findings is that the detrimental effect of right hemisphere strokes on language is (i) much greater than expected; (ii) frequently observed after damage to the right inferior frontal sulcus; (iii) task dependent; (iv) different from the type of impairments observed after left hemisphere strokes; and (v) can result in long-lasting deficits that are (vi) not the consequence of atypical language lateralization.
NeuroImage: Clinical, 2019
The occurrence of wide-scale neuroplasticity in the injured human brain raises hopes for biomarkers to guide personalised treatment. At the individual level, functional reorganisation has proven challenging to quantify using current techniques that are optimised for population-based analyses. In this cross-sectional study, we acquired functional MRI scans in 44 patients (22 men, 22 women, mean age: 39.4 ± 14 years) with a language-dominant hemisphere brain tumour prior to surgery and 23 healthy volunteers (11 men, 12 women, mean age: 36.3 ± 10.9 years) during performance of a verbal fluency task. We applied a recently developed approach to characterise the normal range of functional connectivity patterns during task performance in healthy controls. Next, we statistically quantified deviations from this normal range in individual patients and evaluated the factors driving these deviations. We show that the functional connectivity of brain regions involved in language fluency identifies "fingerprints" of brain plasticity in individual patients that are not detected using standard task-evoked analyses. In contrast to healthy controls, patients with a tumour in their language-dominant hemisphere showed highly variable fingerprints that uniquely distinguished individuals. Atypical fingerprints were influenced by tumour grade and tumour location relative to the typical fluency-activated network. Our findings show how alterations in brain networks can be visualised and statistically quantified from connectivity fingerprints in individual brains. We propose that connectivity fingerprints offer a statistical metric of individually-specific network organisation through which behaviourally-relevant adaptations could be formally quantified and monitored across individuals, treatments and time.
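The fingerprint idea lends itself to a compact numerical illustration. Below is a minimal Python sketch of one way to quantify a patient's deviation from a control-defined normal range of task-state functional connectivity: edgewise correlations are pooled across controls, and a patient's edges are z-scored against that pool. The ROI count, data shapes and z-scoring are illustrative assumptions, not the paper's actual statistical pipeline.

```python
# Minimal sketch of a connectivity "fingerprint", assuming ROI-by-time arrays;
# the published method differs in detail (this is an illustration only).
import numpy as np

def connectivity(ts):
    """Upper-triangle functional connectivity of an (n_rois, n_timepoints) array."""
    c = np.corrcoef(ts)
    return c[np.triu_indices_from(c, k=1)]

def fingerprint(patient_ts, control_ts_list):
    """Z-score each connectivity edge of a patient against the control range."""
    ctrl = np.stack([connectivity(ts) for ts in control_ts_list])
    mu, sd = ctrl.mean(axis=0), ctrl.std(axis=0, ddof=1)
    return (connectivity(patient_ts) - mu) / sd  # large |z| = atypical edge

# Toy data standing in for task-fMRI time series (10 ROIs, 200 volumes).
rng = np.random.default_rng(0)
controls = [rng.standard_normal((10, 200)) for _ in range(23)]
patient = rng.standard_normal((10, 200))
print(f"most atypical edge: |z| = {np.abs(fingerprint(patient, controls)).max():.2f}")
```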
Cortex, 2018
Phrenology was a nineteenth-century endeavour to link personality traits with scalp morphology. It has been both influential and fiercely criticised, not least because of the assumption that scalp morphology can be informative of underlying brain function. Here we test the idea empirically rather than dismissing it out of hand. Whereas nineteenth-century phrenologists had access only to coarse measurement tools (digital technology referring then to fingers), we were able to re-examine phrenology using 21st-century methods and thousands of subjects drawn from the largest neuroimaging study to date. High-quality structural MRI was used to quantify local scalp curvature. The resulting curvature statistics were compared against lifestyle measures acquired from the same cohort of subjects, taking care to match a subset of lifestyle measures to phrenological ideas of brain organisation, in an effort to evoke the character of Victorian times. The results represent the most rigorous evaluation of phrenological claims to date.
Journal of the International Phonetic Association, 2018
NeurIPS 2018 Interpretability and Robustness for Audio, Speech and Language Workshop, 2018
In contrast to the older writing system of the 19th century, modern Hawaiian orthography employs characters for long vowels and glottal stops. These extra characters account for about one-third of the phonemes in Hawaiian, so including them makes a big difference to reading comprehension and pronunciation. However, transliterating between older and newer texts is a laborious task when performed manually. We introduce two related methods to help solve this transliteration problem automatically, given that there were not enough data to train an end-to-end deep learning model. One method is implemented, end-to-end, using finite state transducers (FSTs). The other is a hybrid deep learning approach which approximately composes an FST with a recurrent neural network (RNN). We find that the hybrid approach outperforms the end-to-end FST by partitioning the original problem into one part that can be modelled by hand, using an FST, and into another part, which is easily solved by an RNN trained on the available data.
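To make the hybrid architecture concrete, here is a toy Python sketch: a small candidate-generating transducer proposes modern spellings for an old-orthography word (each vowel may surface plain, long, or preceded by an ʻokina), and a character bigram model trained on modern text, standing in for the paper's RNN, picks the best-scoring candidate. The rules, corpus and scoring are illustrative assumptions, not the authors' system.

```python
# Toy stand-in for the FST+RNN hybrid: a rule transducer enumerates candidate
# modern spellings, and a learned scorer (here a crude bigram model) rescores.
from itertools import product
from collections import Counter

def candidates(old_word):
    """Expand each vowel into modern variants: plain, long, or ʻokina-preceded."""
    long_v = dict(zip("aeiou", "āēīōū"))
    opts = [[ch, long_v[ch], "ʻ" + ch] if ch in long_v else [ch] for ch in old_word]
    return {"".join(chars) for chars in product(*opts)}

def bigram_scorer(corpus):
    """Character-bigram count scorer, a crude proxy for the RNN rescorer."""
    counts = Counter()
    for word in corpus:
        w = f"^{word}$"
        counts.update(zip(w, w[1:]))
    def score(word):
        w = f"^{word}$"
        return sum(counts.get(bg, 0) for bg in zip(w, w[1:]))
    return score

modern_corpus = ["ʻāina", "aloha", "māla", "pō"]  # toy modern-text corpus
score = bigram_scorer(modern_corpus)
print(max(candidates("aina"), key=score))         # picks "ʻāina"
```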
Frontiers in Human Neuroscience, 2014
The aim of this paper was to investigate the neurological underpinnings of auditory-to-motor translation during auditory repetition of unfamiliar pseudowords. We tested two different hypotheses. First, we used functional magnetic resonance imaging in 25 healthy subjects to determine whether a functionally defined area in the left temporo-parietal junction, referred to as Spt (Sylvian-parietal-temporal region), reflected the demands on auditory-to-motor integration during the repetition of pseudowords relative to a semantically mediated nonverbal sound-naming task. The experiment also allowed us to test alternative accounts of Spt function, namely that Spt is involved in subvocal articulation or in auditory processing that can be driven either bottom-up or top-down. The results did not provide convincing evidence that activation increased in either Spt or any other cortical area when non-semantic auditory inputs were being translated into motor outputs. Instead, the results were most consistent with Spt responding to bottom-up or top-down auditory processing, independent of the demands on auditory-to-motor integration. Second, we investigated the lesion sites in 8 patients who had selective difficulties repeating heard words but preserved word comprehension, picture naming and verbal fluency (i.e. conduction aphasia). All 8 patients had white-matter tract damage in the vicinity of the arcuate fasciculus, and only one of the 8 patients had additional damage to the Spt region, defined functionally in our fMRI data. Our results are therefore most consistent with the neurological tradition that emphasizes the importance of the arcuate fasciculus in the non-semantic integration of auditory and motor speech processing.
Frontiers in Human Neuroscience, 2013
Previous studies have investigated orthographic-to-phonological mapping during reading by comparing brain activation for (1) reading words to object naming, or (2) reading pseudowords (e.g., “phume”) to words (e.g., “plume”). Here we combined both approaches to provide new insights into the underlying neural mechanisms. In fMRI data from 25 healthy adult readers, we first identified activation that was greater for reading words and pseudowords relative to picture and color naming. The most significant effect was observed in the left putamen, extending to both anterior and posterior borders. Second, consistent with previous studies, we show that both the anterior and posterior putamen are involved in articulating speech with greater activation during our overt speech production tasks (reading, repetition, object naming, and color naming) than silent one-back matching on the same stimuli. Third, we compared putamen activation for words versus pseudowords during overt reading and auditory repetition. This revealed that the anterior putamen was most activated by reading pseudowords, whereas the posterior putamen was most activated by words irrespective of whether the task was reading words or auditory word repetition. The pseudoword effect in the anterior putamen is consistent with prior studies that associated this region with the initiation of novel sequences of movements. In contrast, the heightened word response in the posterior putamen is consistent with other studies that associated this region with “memory guided movement.” Our results illustrate how the functional dissociation between the anterior and posterior putamen supports sublexical and lexical processing during reading.
During speech production, auditory processing of self-generated speech is used to adjust subsequent articulations. The current study investigated how the proposed auditory-motor interactions are manifest at the neural level in native and non-native speakers of English who were overtly naming pictures of objects and reading their written names. Data were acquired with functional magnetic resonance imaging and analyzed with dynamic causal modeling. We found that (1) higher activity in articulatory regions caused activity in auditory regions to decrease (i.e., auditory suppression), and (2) higher activity in auditory regions caused activity in articulatory regions to increase (i.e., auditory feedback). In addition, we were able to demonstrate that (3) speaking in a non-native language involves more auditory feedback and less auditory suppression than speaking in a native language. The difference between native and non-native speakers was further supported by finding that, within non-native speakers, there was less auditory feedback for those with better verbal fluency. Consequently, the networks of more fluent non-native speakers looked more like those of native speakers. Together, these findings provide a foundation on which to explore auditory-motor interactions during speech production in other human populations, particularly those with speech difficulties.
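The suppression and feedback findings are statements about modulated coupling in a dynamic causal model, whose bilinear state equation is dx/dt = (A + uB)x + Cu. The sketch below integrates that equation with made-up two-region parameters, so that a negative modulatory weight on the articulatory-to-auditory connection reproduces the flavour of auditory suppression; it illustrates the DCM formalism, not the paper's fitted model.

```python
# Illustrative bilinear DCM state equation, dx/dt = (A + u*B)x + C*u, with
# invented parameters for two regions: 0 = articulatory, 1 = auditory.
import numpy as np

A = np.array([[-1.0, 0.3],   # intrinsic coupling (rows = targets, cols = sources)
              [ 0.5, -1.0]])
B = np.array([[ 0.0, 0.0],   # modulation by speaking: a negative weight on the
              [-0.4, 0.0]])  # articulatory->auditory link = "auditory suppression"
C = np.array([1.0, 0.0])     # driving input enters the articulatory region

x, dt, auditory = np.zeros(2), 0.01, []
for step in range(1000):
    u = 1.0 if step < 500 else 0.0           # "speaking" during the first half
    x = x + dt * ((A + u * B) @ x + C * u)   # forward Euler integration
    auditory.append(x[1])
print(f"auditory activity at end of speaking: {auditory[499]:.3f}")
```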
Unlike most languages, which are written using a single script, Japanese uses multiple scripts, including morphographic Kanji and syllabographic Hiragana and Katakana. Here, we used functional magnetic resonance imaging with dynamic causal modeling to investigate competing theories regarding the neural processing of Kanji and Hiragana during a visual lexical decision task. First, a bilateral model investigated interhemispheric connectivity between ventral occipito-temporal (vOT) cortex and Broca's area ("pars opercularis"). We found that Kanji significantly increased the connection strength from right-to-left vOT. This is interpreted in terms of increased right vOT activity for visually complex Kanji being integrated into the left (i.e. language dominant) hemisphere. Second, we used a unilateral left hemisphere model to test whether Kanji and Hiragana rely preferentially on ventral and dorsal paths, respectively, that is, whether they have different intrahemispheric functional connectivity profiles. Consistent with this hypothesis, we found that Kanji increased connectivity within the ventral path (V1 ↔ vOT ↔ Broca's area), and that Hiragana increased connectivity within the dorsal path (V1 ↔ supramarginal gyrus ↔ Broca's area). Overall, the results illustrate how the differential processing demands of Kanji and Hiragana influence both inter- and intrahemispheric interactions.
We used structural magnetic resonance imaging (MRI) and voxel-based morphometry (VBM) to investigate whether the efficiency of word processing in the non-native language (lexical efficiency) and the number of non-native languages spoken (2+ versus 1) were related to local differences in the brain structure of bilingual and multilingual speakers. We dissociate two different correlates for non-native language processing. First, multilinguals who spoke 2 or more non-native languages had higher grey matter density in the right posterior supramarginal gyrus compared to bilinguals who only spoke one non-native language. This is interpreted in relation to previous studies showing that grey matter density in this region is related to the number of words learnt in bilinguals relative to monolinguals and in monolingual adolescents with high versus low vocabulary. Our second result was that, in bilinguals, grey matter density in the left pars opercularis was positively related to lexical efficiency in second language use, as measured by the speed and accuracy of lexical decisions and the number of words produced in a timed verbal fluency task. Grey matter in the same region was also negatively related to the age at which the second language was acquired. This is interpreted in terms of previous findings that associated the left pars opercularis with phonetic expertise in the native language.
Using functional magnetic resonance imaging, we found that when bilinguals named pictures or read words aloud, in their native or nonnative language, activation was higher relative to monolinguals in 5 left hemisphere regions: dorsal precentral gyrus, pars triangularis, pars opercularis, superior temporal gyrus, and planum temporale. We further demonstrate that these areas are sensitive to increasing demands on speech production in monolinguals. This suggests that the advantage of being bilingual comes at the expense of increased work in brain areas that support monolingual word processing. By comparing the effect of bilingualism across a range of tasks, we argue that activation is higher in bilinguals compared with monolinguals because word retrieval is more demanding; articulation of each word is less rehearsed; and speech output needs careful monitoring to avoid errors when competition for word selection occurs between, as well as within, languages.
Oxford Working Papers in Linguistics, Philology & Phonetics 12, 2009
The passive provides arguably the most famous dataset in Polynesian linguistics. While most explorations of the passive over the past 40 years have been framed in terms of Hale’s (1968) phonological and morphological analyses, we instead frame the Hawaiian passive in terms of a word-based analysis. As Parker Jones (2008) has done for New Zealand Māori, we model passivization in Hawaiian as a mapping from active verbs to passive verbs in a feed-forward neural network. Unlike Māori, the Hawaiian passive exhibits productivity for multiple categories with thematic consonants. By scrutinizing the model, we conclude that passivization in Hawaiian is exemplar-driven.
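As a toy restatement of the modelling setup, the sketch below trains a small feed-forward classifier to map an active verb, encoded as padded character one-hots, to a passive-suffix class. The verb list, suffix inventory and encoding are illustrative assumptions; the original model's architecture and training data differ.

```python
# Toy feed-forward mapping from active verbs to passive-suffix classes,
# with invented example pairs (not the paper's lexicon or architecture).
import numpy as np
from sklearn.neural_network import MLPClassifier

SUFFIXES = ["ʻia", "hia", "lia", "mia", "nia"]  # sample passive allomorphs
PAD, ALPHA = 8, "abefghiklmnopruwā ʻ"           # toy character inventory

def encode(verb):
    """Right-align the verb in a fixed window of one-hot character vectors."""
    verb = verb[-PAD:].rjust(PAD)
    vec = np.zeros((PAD, len(ALPHA)))
    for i, ch in enumerate(verb):
        vec[i, ALPHA.index(ch)] = 1.0
    return vec.ravel()

pairs = [("inu", "mia"), ("kau", "lia"), ("ike", "ʻia"),
         ("pau", "hia"), ("kuhi", "hia"), ("hana", "ʻia")]
X = np.stack([encode(verb) for verb, _ in pairs])
y = [SUFFIXES.index(suffix) for _, suffix in pairs]

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)
print(SUFFIXES[net.predict([encode("hele")])[0]])  # suffix guess for a novel verb
```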
Oxford Working Papers in Linguistics, Philology & Phonetics 12, 2009
This paper reports on a word-recognition experiment designed to address the question of how lexical items are stored in the mind. Focusing on speech rate, we investigated whether it shows a priming effect, using semantic priming to ensure lexical access in subjects. What we found was an unexpected and highly significant correlation between target speech rate and response time. While this is an interesting result with important implications, it also means that the experiment was ultimately inconclusive with regard to whether or not speech rate is lexical. We are continuing to explore the question in ongoing work.
Even in response to apparently simple tasks such as hand movement, human brain activity shows remarkable inter-subject variability. Presumably, this variability reflects genuine behavioural or functional variability. Recently, spatial variability of resting-state features in fMRI, specifically connectivity, has been shown to explain (spatial) task-response variability. Such a link, however, is still missing for M/EEG data and its spectrally rich structure. At the same time, it has recently been shown that task responses in M/EEG can be well represented using transient spectral events bursting at fast time scales. Here, we
Loanwords in the World's Languages: A Comparative Handbook (Martin Haspelmath and Uri Tadmor, editors), 2009
arXiv, 2019
Generative latent-variable models are emerging as promising tools in robotics and reinforcement learning. Yet, even though tasks in these domains typically involve distinct objects, most state-of-the-art generative models do not explicitly capture the compositional nature of visual scenes. Two recent exceptions, MONet and IODINE, decompose scenes into objects in an unsupervised fashion. Their underlying generative processes, however, do not account for component interactions. Hence, neither of them allows for principled sampling of novel scenes. Here we present GENESIS, the first object-centric generative model of 3D visual scenes capable of both decomposing and generating scenes by capturing relationships between scene components. GENESIS parameterises a spatial GMM over images, which is decoded from a set of object-centric latent variables that are either inferred sequentially in an amortised fashion or sampled from an autoregressive prior. We train GENESIS on several publicly available datasets and evaluate its performance on scene generation, decomposition, and semi-supervised learning.
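The generative process can be caricatured in a few lines: K object-centric latents are sampled autoregressively, each is decoded to a mixing-mask logit and a component appearance, and the image is their pixelwise mixture. The PyTorch sketch below follows that recipe with placeholder dimensions and modules; it is not the published GENESIS architecture.

```python
# Caricature of an object-centric generative model: autoregressive latent
# prior -> per-component masks and appearances -> spatial mixture over pixels.
import torch
import torch.nn as nn

K, Z, H, W = 4, 16, 32, 32  # components, latent size, image height/width

class ToySceneGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTMCell(Z, Z)               # autoregressive prior over slots
        self.to_stats = nn.Linear(Z, 2 * Z)        # latent mean and log-variance
        self.decode_mask = nn.Linear(Z, H * W)     # mixing logits per pixel
        self.decode_rgb = nn.Linear(Z, 3 * H * W)  # component appearance

    def sample(self):
        z = torch.zeros(1, Z)
        h, c = torch.zeros(1, Z), torch.zeros(1, Z)
        logits, means = [], []
        for _ in range(K):                         # one prior step per component
            h, c = self.rnn(z, (h, c))
            mu, logvar = self.to_stats(h).chunk(2, dim=-1)
            z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)
            logits.append(self.decode_mask(z).view(1, 1, H, W))
            means.append(self.decode_rgb(z).view(1, 3, H, W))
        pi = torch.softmax(torch.cat(logits, dim=1), dim=1)       # masks sum to 1
        x = (pi.unsqueeze(2) * torch.stack(means, dim=1)).sum(1)  # mixture mean
        return x

print(ToySceneGenerator().sample().shape)  # torch.Size([1, 3, 32, 32])
```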