Readability of Texts: Human Evaluation Versus Computer Index

TEXT DIFFICULTY: A COMPARISON OF READABILITY FORMULAE AND EXPERTS’ JUDGMENT

Teachers of English, librarians, and researchers have long been interested in finding the right text for the right reader. In teaching English as a Second Language (L2), text writers often try to meet this demand by simplifying texts for readers. The resulting term "readability" can be defined as "the ease of reading words and sentences" (Hargis et al., 1998). The aim of this research was to compare ways of finding the right text for the right reader: traditional readability formulae (Flesch Reading Ease, Flesch-Kincaid Grade Level), the Coh-Metrix Second Language (L2) Reading Index, a readability formula based on psycholinguistic and cognitive models of reading, and teachers' estimation of grade levels using leveled texts on a website. To this end, a selection of texts from a corpus of intuitively simplified texts was used (N = 30). Coh-Metrix readability levels, Flesch Reading Ease, and Flesch-Kincaid Grade Levels of the texts were calculated via the Coh-Metrix Web Tool. Three teachers of English were asked to judge the levels of the texts. When the relationships among the Coh-Metrix readability level, the traditional formulae, and the text levels on the website were analysed via SPSS, a weak negative correlation was found between Flesch-Kincaid Grade Level and the text levels on the website (-.39). Additionally, there was a weak negative correlation between the text levels on the website and Flesch Reading Ease scores (-.41). However, there was a moderate negative correlation between Coh-Metrix readability levels and the text levels on the website (-.63), while Teacher 1's judgments and Coh-Metrix readability levels had a very strong positive correlation (.95). The findings indicate that readability formulae can help L2 teachers select texts for their students for teaching and assessment purposes.
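For reference, the two traditional formulae compared in this study are defined in closed form over surface counts. The following is a minimal Python sketch of both, assuming a naive vowel-group syllable counter (tools such as Coh-Metrix use dictionary-based syllabification, so their scores will differ slightly):

```python
import re

def count_syllables(word):
    # Naive heuristic: count groups of consecutive vowels.
    # Dictionary-based syllabifiers are more accurate.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syllables = sum(count_syllables(w) for w in words)

    # Flesch Reading Ease: higher = easier (roughly 0-100).
    fre = 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syllables / n_words)
    # Flesch-Kincaid Grade Level: maps the same counts to a US school grade.
    fkgl = 0.39 * (n_words / sentences) + 11.8 * (n_syllables / n_words) - 15.59
    return fre, fkgl

fre, fkgl = readability("The cat sat on the mat. It was a sunny day.")
print(f"Flesch Reading Ease: {fre:.1f}, Flesch-Kincaid Grade: {fkgl:.1f}")
```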

FORMALISING TEXT DIFFICULTY WITHIN THE EFL CONTEXT (Talk)

Readability indices for the assessment of textbooks: a feasibility study in the context of EFL (Paper)

Readability indices have been widely used to measure textual difficulty. They can be very useful for the automatic classification of texts, especially within language teaching. Among other applications, they allow the difficulty level of texts to be determined in advance without even reading them through. The aim of this investigation is twofold: first, to examine the degree of accuracy of the six most commonly used readability indices: Flesch Reading Ease, Flesch-Kincaid Grade Level, Gunning Fog, Automated Readability Index, SMOG, and Coleman-Liau; and second, using the data obtained, to propose a new optimised measure.
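Beyond the two Flesch measures sketched earlier, the remaining four indices named here also reduce to simple formulas over surface counts. A minimal Python sketch, assuming the counts (sentences, words, characters, complex/polysyllabic words) have already been extracted from the text; the example values are illustrative, not from any corpus in these studies:

```python
import math

def gunning_fog(words, sentences, complex_words):
    # Complex words: words with three or more syllables.
    return 0.4 * (words / sentences + 100 * complex_words / words)

def automated_readability_index(chars, words, sentences):
    # Uses characters per word instead of syllables.
    return 4.71 * (chars / words) + 0.5 * (words / sentences) - 21.43

def smog(polysyllables, sentences):
    # Defined for 30-sentence samples; scaled proportionally otherwise.
    return 1.043 * math.sqrt(polysyllables * 30 / sentences) + 3.1291

def coleman_liau(chars, words, sentences):
    letters_per_100 = 100 * chars / words    # average letters per 100 words
    sents_per_100 = 100 * sentences / words  # average sentences per 100 words
    return 0.0588 * letters_per_100 - 0.296 * sents_per_100 - 15.8

# Illustrative counts for a short passage:
print(gunning_fog(words=120, sentences=8, complex_words=14))
print(automated_readability_index(chars=560, words=120, sentences=8))
print(smog(polysyllables=14, sentences=8))
print(coleman_liau(chars=560, words=120, sentences=8))
```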

Text difficulty vs text readability: Students' voices

EduLite: Journal of English Education, Literature and Culture, 2022

Reading is an essential skill to be mastered, especially by university students, and it is the lecturers' responsibility to train their students to develop their reading skills. To do so, lecturers have to develop materials or choose materials from existing books for use in the teaching-learning process. The text difficulty must be appropriate to the students' English proficiency level. This study aimed at finding the students' perceptions of the difficulty of the texts used in the reading class and the corresponding text readability. The study employed a survey design and content analysis involving 141 second-semester students of the English Language Education Department of a state university. Questionnaires were used to gather data on students' perceptions of text difficulty, and text readability was analysed with the help of https://readabilityformulas.com. The data on students' perceptions were analysed using percentages, while the data on the text re...

The Readability of EFL Texts: Teacher’s and Students’ Perspectives

The ease of comprehending a text—or readability—should be estimated by teachers before they present it to learners in the teaching of reading; otherwise, the text may be either too easy or too frustrating to read, decreasing the learners' interest in the text and demotivating them from reading further. To predict whether a text is at the right level of difficulty for the learners, the teacher may use his or her own subjective judgment, especially with ample experience in teaching reading and good knowledge of the learners' reading ability. There has been mixed evidence about the accuracy of teachers' judgment in estimating readability, and the present study provides an additional piece of empirical evidence. This study aimed to find out whether the subjective judgment of the teacher correlated with the learners' perspective on the difficulty level of English texts. A lecturer in the English Department of Unesa ranked five texts from easiest to hardest, then asked her students (N = 41) to do the same. The Pearson Product Moment correlation was computed between these two rankings, resulting in a very strong negative correlation (r = -.98). This finding demonstrates that the texts estimated to be harder to read turned out to be easier for the students, and vice versa. Some possible reasons for this phenomenon were explained, and it was recommended that a broader spectrum of factors be considered when using subjective judgment to predict readability so that the results could be more accurate.
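The reported statistic is straightforward to reproduce. A minimal sketch with scipy, using hypothetical rank data (five texts ranked 1 = easiest to 5 = hardest; the values below are illustrative, not the study's data):

```python
from scipy.stats import pearsonr, spearmanr

# Hypothetical ranks for five texts, 1 = easiest ... 5 = hardest.
teacher_ranks = [1, 2, 3, 4, 5]
student_ranks = [5, 4, 3, 2, 1]  # reversed ordering, as the study found

r, p = pearsonr(teacher_ranks, student_ranks)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")

# For rank data, Spearman's rho is often the more natural choice;
# on pure ranks the two statistics coincide.
rho, _ = spearmanr(teacher_ranks, student_ranks)
print(f"Spearman rho = {rho:.2f}")
```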

Text readability: its impact on reading comprehension and reading time

Journal of Education and Learning (EduLearn), 2024

Recently, the readability of texts has become a focus of reading research because it is believed to have implications for reading comprehension, which is of utmost importance in the field of English as a foreign language (EFL), particularly in the teaching, learning, and assessment of reading comprehension. Unfortunately, the influence of text readability on reading comprehension (and reading time) has not been well studied in the EFL context. Most text readability studies are conducted in medical contexts, and these studies are often limited to predicting readability scores for sample texts. To address this gap, the current study aimed to evaluate the influence of text readability levels (based on the Flesch-Kincaid Grade Level, FKGL) on students' reading comprehension and reading time. Data were collected through a reading test and analyzed using SPSS version 22. The Friedman test revealed that the distributions of students' reading comprehension scores (χ² = 197.532, p < .001) and reading time (χ² = 215.323, p < .001) differed across the texts, suggesting that the readability of texts has a significant influence on both. The study contributes to the practice of reading instruction and assessment. Limitations and suggestions for further research are briefly discussed.
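The Friedman test used here is a nonparametric repeated-measures test: each student reads every text, and the test asks whether the score distributions differ across texts. A minimal sketch with scipy, using hypothetical scores (all names and values below are placeholders, not the study's data):

```python
from scipy.stats import friedmanchisquare

# Hypothetical comprehension scores: one list per text,
# with the same students in the same order in each list.
text_easy   = [9, 8, 9, 7, 8, 9]
text_medium = [7, 6, 8, 6, 7, 7]
text_hard   = [4, 5, 5, 3, 4, 5]

stat, p = friedmanchisquare(text_easy, text_medium, text_hard)
print(f"Friedman chi-square = {stat:.3f}, p = {p:.4f}")
```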

Readability indices for the assessment of textbooks: a feasibility study in the context of EFL

Pascual Cantos Gómez, Ángela Almela Sánchez Lafuente, 2019

Readability indices have been widely used to measure textual difficulty. They can be useful for the automatic classification of texts, especially in language teaching. Among other applications, they allow the difficulty level of texts to be determined in advance without the need to read them through. The aim of this research is twofold: first, to examine the degree of accuracy of the six most commonly used readability indices, and second, to present a new optimized measure. The main problem is that these indices may yield disparate results, and this is precisely what motivated our attempt to combine their potential. A discriminant analysis of all the variables under examination has enabled the creation of a much more precise model, improving on the previous best results by 15%. Furthermore, errors and disparities in the difficulty level of the analyzed texts have been detected.
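The combination step described above can be approximated with off-the-shelf tooling. A minimal scikit-learn sketch of discriminant analysis over index scores, assuming a feature matrix whose columns are the six index values per text and labels giving each text's known difficulty level (the data here are random placeholders, not the authors' corpus, and the pipeline is an illustration rather than their exact procedure):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: 90 texts x 6 index scores (FRE, FKGL, Fog, ARI, SMOG, CLI),
# with labels 0/1/2 for three difficulty levels. Replace with real scores.
X = rng.normal(size=(90, 6))
y = np.repeat([0, 1, 2], 30)

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```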

Text Readability: A Snapshot

SALTeL Journal (Southeast Asia Language Teaching and Learning)

Selecting suitable reading materials is taxing and challenging for many English instructors. Text readability analysis can be used to automate the selection of reading materials and the assessment of language learners' reading ability. Readability formulas have been broadly used to determine text difficulty in terms of learners' grade level. Based on mathematical calculations, a readability formula examines certain features of a text in order to provide a rough approximation of its difficulty. This paper reflects on some aspects and issues of readability analysis.

Reconstructing Readability: Recent Developments and Recommendations in the Analysis of Text Difficulty

Educational Psychology Review, 2012

Largely due to technological advances, methods for analyzing readability have increased significantly in recent years. While past researchers designed hundreds of formulas to estimate the difficulty of texts for readers, controversy has surrounded their use for decades, with criticism stemming largely from their application in creating new texts as well as their utilization of surface-level indicators as proxies for the complex cognitive processes that take place when reading a text. This review focuses on examining developments in the field of readability during the past two decades with the goal of informing both current and future research and providing recommendations for present use. The fields of education, linguistics, cognitive science, psychology, discourse processing, and computer science have all made recent strides in developing new methods for predicting the difficulty of texts for various populations. However, there is a need for further development of these methods if they are to become widely available.

Keywords: Readability, Text difficulty, Reading, Text analysis

A century of reading research paralleled a century of research into what makes one text more or less difficult to read and comprehend than another. Some estimate that by the 1980s, over 200 readability formulas had already been developed (DuBay 2004), and since the 1980s, the area has exploded in fields like discourse processing and computer science. The question that remains in the minds of educators and researchers alike is "what is the best way of determining the difficulty of a particular text?" Several reviews and summaries are available of the older, more traditional methods of assessing readability (e.g., Bormuth 1966; DuBay 2004; Klare 1974), and most researchers in the field discuss these classic methods by way of introduction to their own research. Controversy, however, has surrounded these older formulas, and new methods are constantly being developed and tested. The purposes of this review were to (a) examine recent developments in the field of