Syntactic structure and artificial grammar learning: The learnability of embedded hierarchical structures
Related papers
In an artificial grammar learning study, Lai & Poletiek (2011) found that human participants could learn a center-embedded recursive grammar only if the input during training was presented in a staged fashion. Previous studies on artificial grammar learning, with randomly ordered input, failed to demonstrate learning of such center-embedded structures. In the account proposed here, the staged-input effect is explained by a fine-tuned match between the statistical characteristics of the incrementally organized input and the development of human cognitive learning over time, from low-level, linear associative processing to hierarchical processing of long-distance dependencies. Interestingly, staged input seems to be effective only for learning hierarchical structures, and unhelpful for learning linear grammars.
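To make the staged-input manipulation concrete, here is a minimal Python sketch of an AGL-style A1 A2 B2 B1 grammar. The syllable inventories and set sizes are assumptions for illustration, not the materials of Lai & Poletiek (2011); the point is only that staged training sorts exemplars by embedding depth, whereas the control condition shuffles them.

```python
import random

# Hypothetical syllable inventories; the actual stimuli differ.
A_SYLLABLES = ["ba", "bi", "bu"]   # each A_i must later be closed by B_i
B_SYLLABLES = ["pa", "pi", "pu"]   # B_i is matched to A_i by index

def center_embedded(depth: int) -> str:
    """Generate one A1 A2 ... An Bn ... B2 B1 string of the given depth."""
    idx = [random.randrange(len(A_SYLLABLES)) for _ in range(depth)]
    return " ".join([A_SYLLABLES[i] for i in idx] +
                    [B_SYLLABLES[i] for i in reversed(idx)])

def staged_input(n_per_depth: int, max_depth: int) -> list[str]:
    """Staged ('starting small') ordering: all depth-1 items, then depth-2, ..."""
    return [center_embedded(d)
            for d in range(1, max_depth + 1)
            for _ in range(n_per_depth)]

random.seed(0)
staged = staged_input(n_per_depth=3, max_depth=3)
shuffled = random.sample(staged, len(staged))  # the randomly ordered control
print(staged)
```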
Cognitive Science, 2018
It has been suggested that external and/or internal limitations may, paradoxically, lead to superior learning, that is, the concepts of starting small and less is more (Elman, 1993; Newport, 1990). In this paper, we explore the type of incremental ordering during training that might help learning, and what mechanism explains this facilitation. We report four artificial grammar learning experiments with human participants. In Experiments 1a and 1b we found a beneficial effect of starting small using two types of simple recursive grammars: right-branching and center-embedding, with recursive embedded clauses in fixed positions and of fixed length. This effect was replicated in Experiment 2 (N = 100). In Experiments 3 and 4, we used a more complex center-embedded grammar with recursive loops in variable positions, producing strings of variable length. When participants were presented with an incremental ordering of training stimuli, as in natural language, they were better able to generalize their knowledge…
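The two grammar types from Experiments 1a and 1b differ only in how the dependencies are ordered on the surface. A hedged sketch under assumed inventories (the actual stimuli and category sizes are not given in this abstract):

```python
import random

A = ["le", "na", "ki"]          # hypothetical A-category syllables
B = ["jo", "mu", "se"]          # B_i is the dependent matched to A_i

def center_embedded(depth: int) -> str:
    """Nested dependencies: A1 A2 A3 B3 B2 B1 (last opened, first closed)."""
    idx = [random.randrange(len(A)) for _ in range(depth)]
    return " ".join([A[i] for i in idx] + [B[i] for i in reversed(idx)])

def right_branching(depth: int) -> str:
    """Tail recursion: A1 B1 A2 B2 A3 B3 (each dependency closed locally)."""
    out = []
    for _ in range(depth):
        i = random.randrange(len(A))
        out += [A[i], B[i]]
    return " ".join(out)

random.seed(1)
print(center_embedded(3))    # long-distance, nested dependencies
print(right_branching(3))    # adjacent dependencies only
```

The contrast makes clear why the two grammars load differently on learners: the right-branching strings never require tracking an open dependency across intervening material.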
PLOS ONE, 2020
In this paper we probe the interaction between sequential and hierarchical learning by investigating implicit learning in a group of school-aged children. We administered a serial reaction time task, in the form of a modified Simon Task in which the stimuli were organised following the rules of two distinct artificial grammars, specifically Lindenmayer systems: the Fibonacci grammar (Fib) and the Skip grammar (a modification of the former). The choice of grammars is determined by the goal of this study, which is to investigate how sensitivity to structure emerges in the course of exposure to an input whose surface transitional properties (by hypothesis) bootstrap structure. The studies conducted to date have been mainly designed to investigate low-level superficial regularities, learnable in purely statistical terms, whereas hierarchical learning has not been effectively investigated yet. Directly pinpointing the interplay between sequential and hierarchical learning is instead at the core of our study: we presented children with two grammars, Fib and Skip, which share the same transitional regularities, thus providing identical opportunities for sequential learning, while crucially differing in their hierarchical structure. More particularly, there are specific points in the sequence (k-points), which, despite giving rise to the same transitional regularities in the two grammars, support hierarchical reconstruction in Fib but not in Skip. In our protocol, children were simply asked to perform a traditional Simon Task, and they were completely unaware of the real purposes of the task. Results indicate that sequential learning occurred in both grammars, as shown by the decrease in reaction times throughout the task, while differences were found in the sensitivity to k-points: these, we contend, play a role in hierarchical reconstruction in Fib, whereas they are devoid of structural significance in Skip. More particularly, we found that children were faster at k-points in sequences produced by Fib, thus providing an entirely new kind of evidence for the hypothesis that implicit learning involves an early activation of strategies of hierarchical reconstruction, based on a straightforward interplay with the statistically based computation of transitional regularities over the sequences of symbols.
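The Fibonacci grammar is a Lindenmayer system with the rewriting rules 0 → 1 and 1 → 01, applied in parallel to every symbol at each generation, so string lengths grow as Fibonacci numbers. A minimal sketch (the Skip grammar's exact modification is not reproduced here):

```python
def lsystem(axiom: str, rules: dict[str, str], n: int) -> str:
    """Apply parallel rewriting rules n times (a Lindenmayer system)."""
    s = axiom
    for _ in range(n):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Fibonacci grammar: 0 -> 1, 1 -> 01
FIB = {"0": "1", "1": "01"}

for gen in range(7):
    print(gen, lsystem("0", FIB, gen))
# 0 0
# 1 1
# 2 01
# 3 101
# 4 01101
# 5 10101101
# 6 0110110101101
```

Note the surface regularity this produces: a 0 is always followed by 1 (a deterministic transition), while a 1 can be followed by either 0 or 1. Grammars sharing exactly these transitional regularities can still differ in hierarchical structure, which is the contrast the study exploits.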
Semantics boosts syntax in artificial grammar learning tasks with recursion.
2012
Center-embedded recursion (CER) in natural language is exemplified by sentences such as "The malt that the rat ate lay in the house." Parsing center-embedded structures is a focus of attention because this could be one of the cognitive capacities that make humans distinct from all other animals. The ability to parse CER is usually tested by means of artificial grammar learning (AGL) tasks, during which participants have to infer the rule from a set of artificial sentences.
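What participants must induce in CER tasks is last-in, first-out dependency matching: the most recently opened dependency must be closed first. A minimal stack-based sketch of that computation (the token labels are hypothetical):

```python
# A_i must be closed by its matching B_i, in nested (LIFO) order.
PAIRS = {"A1": "B1", "A2": "B2", "A3": "B3"}

def is_grammatical(tokens: list[str]) -> bool:
    """True iff every opener is closed by its matching closer in LIFO order,
    as in the canonical AGL pattern A1 A2 B2 B1."""
    stack = []
    for tok in tokens:
        if tok in PAIRS:                 # an opener: remember its required closer
            stack.append(PAIRS[tok])
        elif not stack or stack.pop() != tok:
            return False                 # wrong closer, or closer with no opener
    return not stack                     # every opener must have been closed

print(is_grammatical(["A1", "A2", "B2", "B1"]))  # True  (nested)
print(is_grammatical(["A1", "A2", "B1", "B2"]))  # False (crossed)
```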
Complexity, Training Paradigm Design, and the Contribution of Memory Subsystems to Grammar Learning
Although there is variability in nonnative grammar learning outcomes, the contributions of training paradigm design and memory subsystems are not well understood. To examine this, we presented learners with an artificial grammar that formed words via simple and complex morphophonological rules. Across three experiments, we manipulated training paradigm design and measured subjects' declarative, procedural, and working memory subsystems. Experiment 1 demonstrated that passive, exposure-based training boosted learning of both simple and complex grammatical rules, relative to no training. Additionally, procedural memory correlated with simple rule learning, whereas declarative memory correlated with complex rule learning. Experiment 2 showed that presenting corrective feedback during the test phase did not improve learning. Experiment 3 revealed that structuring the order of training so that subjects were first exposed to the simple rule and then to the complex one improved learning. The cumulative findings shed light on the contributions of grammatical complexity, training paradigm design, and domain-general memory subsystems in determining grammar learning success.
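The study's actual morphophonological rules are not quoted in this abstract, so the sketch below is purely illustrative of the simple/complex distinction it draws: an allomorph conditioned on an adjacent segment versus one conditioned on a non-adjacent segment.

```python
VOWELS = set("aeiou")

def simple_suffix(stem: str) -> str:
    """Hypothetical simple rule: allomorph depends on the stem-final segment."""
    return stem + ("-ke" if stem[-1] in VOWELS else "-go")

def complex_suffix(stem: str) -> str:
    """Hypothetical complex rule: allomorph depends on the first vowel of the
    stem (a non-adjacent, harmony-like dependency)."""
    first_vowel = next((ch for ch in stem if ch in VOWELS), "a")
    return stem + ("-ke" if first_vowel in "ei" else "-go")

for stem in ["pilu", "tamok"]:
    print(stem, simple_suffix(stem), complex_suffix(stem))
```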
What artificial grammar learning reveals about the neurobiology of syntax
Brain and Language, 2012
In this paper we examine the neurobiological correlates of syntax, the processing of structured sequences, by comparing FMRI results on artificial and natural language syntax. We discuss these and similar findings in the context of formal language and computability theory. We used a simple right-linear unification grammar in an implicit artificial grammar learning paradigm in 32 healthy Dutch university students (natural language FMRI data were already acquired for these participants). We predicted that artificial syntax processing would engage the left inferior frontal region (BA 44/45) and that this activation would overlap with syntax-related variability observed in the natural language experiment. The main findings of this study show that the left inferior frontal region centered on BA 44/45 is active during artificial syntax processing of well-formed (grammatical) sequences, independent of local subsequence familiarity. The same region is engaged to a greater extent when a syntactic violation is present and structural unification becomes difficult or impossible. The effects related to artificial syntax in the left inferior frontal region (BA 44/45) were essentially identical when we masked these with activity related to natural syntax in the same subjects. Finally, the medial temporal lobe was deactivated during this operation, consistent with the view that implicit processing does not rely on declarative memory mechanisms that engage the medial temporal lobe. In the Discussion, we raise the question, in the context of recent FMRI findings, of whether Broca's region (or its subregions) is specifically related to syntactic movement operations or to the processing of hierarchically nested non-adjacent dependencies. We conclude that this is not the case. Instead, we argue that the left inferior frontal region is a generic on-line sequence processor that unifies information from various sources in an incremental and recursive manner, independent of whether there are any processing requirements related to syntactic movement or hierarchically nested structures. In addition, we argue that the Chomsky hierarchy is not directly relevant for neurobiological systems.
The neural basis of recursion of complex syntactic hierarchy
Biolinguistics, 2011
Language is a faculty specific to humans. It is characterized by hierarchical, recursive structures. The processing of hierarchically complex sentences is known to recruit Broca's area. Comparisons across brain imaging studies investigating similar hierarchical structures in different domains revealed that complex hierarchical structures that mimic those of natural languages mainly activate Broca's area, that is, left Brodmann area (BA) 44/45, whereas hierarchically structured mathematical formulae, in addition, strongly recruit the more anteriorly located region BA 47. The present results call for a model of the prefrontal cortex assuming two systems for processing complex hierarchy: one system, governed by cognitive control and following the posterior-to-anterior gradient, which is active when processing hierarchically structured mathematical formulae; and one system, confined to the posterior parts of the prefrontal cortex, which processes complex syntactic hierarchies in language efficiently.
2006
Sensitivity to distributional characteristics of sequential linguistic and nonlinguistic stimuli has been shown to play a role in learning the underlying structure of these stimuli. A growing body of experimental and computational research with (artificial) grammars suggests that learners are sensitive to various distributional characteristics of their environment (Kuhl, 2004; Onnis, Monaghan, Richmond & Chater, 2005; Rohde & Plaut, 1999). We propose that, at a higher level, statistical characteristics of the full sample of stimuli on which learning is based also affect learning. We provide a statistical model that accounts for such an effect, and experimental data with the Artificial Grammar Learning (AGL) methodology, showing that learners are also sensitive to distributional characteristics of a full sample of exemplars.
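One distributional characteristic commonly computed over a full sample is the set of first-order transitional probabilities P(next | current), pooled over all exemplars. A minimal sketch with made-up strings (not the study's materials):

```python
from collections import Counter

# Made-up AGL-style strings, for illustration only.
sample = ["XMXRM", "XMTRM", "XRTM", "XMXRTM"]

bigrams = Counter()
contexts = Counter()
for s in sample:
    for a, b in zip(s, s[1:]):       # every adjacent pair in every string
        bigrams[(a, b)] += 1
        contexts[a] += 1             # count a only when it has a successor

# Transitional probability: how often b follows a, given a occurred.
tp = {(a, b): n / contexts[a] for (a, b), n in bigrams.items()}
for (a, b), p in sorted(tp.items()):
    print(f"P({b}|{a}) = {p:.2f}")
```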
The formation of structurally relevant units in artificial grammar learning
The Quarterly Journal of Experimental Psychology Section A, 2002
A total of 78 adult participants were asked to read a sample of strings generated by a finite state grammar and, immediately after reading each string, to mark the natural segmentation positions with a slash bar. They repeated the same task after a phase of familiarization with the material, which consisted, depending on the group involved, of learning items by rote, performing a short-term matching task, or searching for the rules of the grammar. Participants formed the same number of cognitive units before and after the training phase, thus indicating that they did not tend to form increasingly large units. However, the number of different units reliably decreased, whatever the task that participants had performed during familiarization. This result indicates that segmentation was increasingly consistent with the structure of the grammar. A theoretical account of this phenomenon, based on ubiquitous principles of associative memory and learning, is proposed. This account is supported by the ability of a computer model implementing those principles, PARSER, to reproduce the observed pattern of results. The implications of this study for developmental theories aimed at accounting for how children become able to parse sensory input into physically and linguistically relevant units are discussed.
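For orientation, here is a heavily simplified sketch of PARSER-style chunking: attend to one to three current units, reinforce the chunk formed by the percept, and let everything else decay. The parameter values and the wrap-around stream are assumptions of this sketch, not the published model's specification.

```python
import random

def parser_model(stream: str, steps: int = 2000, gain: float = 1.0,
                 decay: float = 0.05, threshold: float = 1.0) -> dict[str, float]:
    """Minimal PARSER-style chunker: each step reads 1-3 perceptual units,
    decays all stored chunks, then reinforces the chunk just perceived."""
    lexicon: dict[str, float] = {}   # chunk -> weight
    pos = 0
    for _ in range(steps):
        percept = ""
        for _ in range(random.randint(1, 3)):    # attentional span of 1-3 units
            if pos >= len(stream):
                pos = 0                           # wrap around the input stream
            # A unit is the longest known chunk above threshold matching here,
            # falling back to a single symbol.
            unit = next((c for c in sorted(lexicon, key=len, reverse=True)
                         if lexicon[c] >= threshold
                         and stream.startswith(c, pos)),
                        stream[pos])
            percept += unit
            pos += len(unit)
        lexicon = {c: w - decay for c, w in lexicon.items() if w - decay > 0}
        lexicon[percept] = lexicon.get(percept, 0.0) + gain
    return lexicon

random.seed(2)
stream = "".join(random.choice(["badu", "tiko", "gola"]) for _ in range(400))
chunks = sorted(parser_model(stream).items(), key=lambda kv: -kv[1])[:5]
print(chunks)   # high-weight chunks tend to align with the embedded "words"
```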