Comparing and validating methods of reading instruction using behavioural and neural findings in an artificial orthography - PubMed

Comparing and validating methods of reading instruction using behavioural and neural findings in an artificial orthography

J S H Taylor et al. J Exp Psychol Gen. 2017 Jun.

Abstract

There is strong scientific consensus that emphasizing print-to-sound relationships is critical when learning to read alphabetic languages. Nevertheless, reading instruction varies across English-speaking countries, from intensive phonic training to multicuing environments that teach sound- and meaning-based strategies. We sought to understand the behavioral and neural consequences of these differences in relative emphasis. We taught 24 English-speaking adults to read 2 sets of 24 novel words (e.g., /buv/, /sig/), written in 2 different unfamiliar orthographies. Following pretraining on oral vocabulary, participants learned to read the novel words over 8 days. Training in 1 language was biased toward print-to-sound mappings while training in the other language was biased toward print-to-meaning mappings. Results showed striking benefits of print-sound training on reading aloud, generalization, and comprehension of single words. Univariate analyses of fMRI data collected at the end of training showed that print-meaning relative to print-sound training increased neural effort in dorsal pathway regions involved in reading aloud. Conversely, activity in ventral pathway brain regions involved in reading comprehension was no different following print-meaning versus print-sound training. Multivariate analyses validated our artificial language approach, showing high similarity between the spatial distribution of fMRI activity during artificial and English word reading. Our results suggest that early literacy education should focus on the systematicities present in print-to-sound relationships in alphabetic languages, rather than teaching meaning-based strategies, in order to enhance both reading aloud and comprehension of written words. (PsycINFO Database Record (c) 2017 APA, all rights reserved).


Figures

Figure 1

Examples of stimuli used and illustration of how the learning focus manipulation was implemented. Note that this represents the experience of one participant as the assignment of orthography to spoken word set, noun set, and learning focus was counterbalanced across participants, as detailed in Appendix B. See the online article for the color version of this figure.

Figure 2

Overview of procedures, task details, key questions, data sets, and results for each part of the experiment. Column 1 gives an overview of each training procedure. Column 2 gives further details of the behavioral and MRI protocols. In rows B and E, which show the trial format and timing during MRI scans, dotted lines indicate correspondence between stimulus presentation and scan onset. Black outlined boxes show what participants were viewing, and what they were hearing, thinking, or saying is shown above each box. Column 3 delineates the key questions addressed by each part of the experiment, column 4 shows where the results can be found, and in column 5 ticks and crosses indicate whether each prediction was confirmed by the data. See the online article for the color version of this figure.

Figure 3

Accuracy and RTs in tasks that involved mapping between orthography and phonology for the O–P and O–S focus languages on each day and each session of training. All error bars in this and subsequent figures use standard error appropriate for within-participant designs (Loftus & Masson, 1994). Error bars are shown only for the first session on each day because only these data points were statistically compared. See the online article for the color version of this figure.

Figure 4

Accuracy and RTs in tasks that involved mapping between orthography and semantics for the O–P and O–S focus languages on each day and each session of training. Error bars are shown only for the first session on each day because only these data points were statistically compared. See the online article for the color version of this figure.

Figure 5

Replotting of data shown in the top left graphs of Figures 3 and 4 to illustrate how having completed relatively more O–P training sessions benefits saying the meanings (left panel), whereas having completed relatively more O–S training sessions does not benefit reading aloud (right panel). Graphs show the proportion of items correct in saying the meaning (left panel) and in reading aloud (right panel) the first eight times (after the initial scanning session) participants completed these tasks for the O–P and O–S focus languages. Joined up points indicate that sessions were completed on the same day, whereas separated points indicate that sessions were completed on different days. The information in the table that is connected to the graphs by gray lozenges shows how, when the number of times saying the meaning is equated, participants have received relatively more O–P training for the O–P than the O–S focus language (left panel). Conversely, when the number of times reading aloud is equated, participants have received relatively more O–S training for the O–S than the O–P focus language (right panel). See the online article for the color version of this figure.

Figure 6

Accuracy and RTs in test tasks conducted at the end of behavioral training. (A) Reading aloud and saying the meanings of trained items; (B) reading aloud of trained and untrained items; (C) old-new decision. See the online article for the color version of this figure.

Figure 7

Brain regions that were more active during English pseudoword than word reading (yellow), or word than pseudoword reading (green). Left and right hemisphere slices show whole-brain activations at p < .001 voxelwise uncorrected and p < .05 FWE cluster-corrected for 20 participants.

Figure 8

Brain regions active when learning pronunciations (OP learning) and meanings (OS learning) of artificial orthographies in MRI Scan 1, prior to behavioral training. Left and right hemisphere slices show whole-brain activations at p < .001 voxelwise uncorrected and p < .05 FWE cluster-corrected for 18 participants. Panel A shows the simple effect of see-hear > hear-only activity for OP (blue) and OS (red) learning, and the overlap between these (purple). Panel B shows the results of an ANOVA that included the factors learning type (OP vs. OS) and trial type (see-hear vs. hear-only). Slices show the main effect of learning type (pink: OS > OP, light blue: OP > OS) and the interaction between learning type and trial type that was driven by greater activation for see-hear relative to hear-only trials for OS than OP learning (red). Note that no brain regions showed more activation for OP than OS learning. Plots show activation for hear-only (HO) and see-hear (SH) trials for OP and OS learning at representative peak voxels that showed a main effect of learning type (pink boxes) or an interaction between learning and trial type (red box).

Figure 9

Brain regions showing differential activity when recalling pronunciations (OP recall) versus meanings (OS recall) of artificial orthographies, in MRI Scan 1, prior to behavioral training. Left hemisphere slices show the results of paired t tests between these two tasks (pink: OS > OP, light blue: OP > OS), collapsed across see-think (ST) and see-speak (SS) trials. Note that no brain regions showed greater activity when recalling meanings than pronunciations. Whole-brain activations are presented at p < .001 voxelwise uncorrected and p < .05 FWE cluster-corrected for 18 participants. Plots show activation for see-think and see-speak trials for OP and OS recall at representative peak voxels that showed greater activity for recalling pronunciations (OP) than meanings (OS).

Figure 10

Brain activity during reading aloud in MRI Scan 2 at the end of training. Slices show main effects from an ANOVA that included the factors lexicality (trained vs. untrained) and training focus (OP vs. OS), which was conducted on activation during see-think trials, for items that were read aloud correctly. Whole-brain activations are presented at p < .001 voxelwise uncorrected and p < .05 FWE cluster-corrected for 20 participants. (A) Slices show the main effect of lexicality, green = trained > untrained, yellow = untrained > trained. (B) Slices show the main effect of training focus, red = OS > OP focus. Note that no brain regions showed more activation for the OP than the OS focus language. Plots for A and B show see-think activation for trained (dark bars) and untrained items (light bars) for the OP (blue bars) and the OS (red bars) focus languages, at representative peak voxels that showed greater activity for trained than untrained items (green box), untrained than trained items (yellow box), or the OS relative to the OP focus language (red box).
