Computational Text Analysis: A More Comprehensive Approach to Determine Readability of Reading Materials

Text readability: its impact on reading comprehension and reading time

Journal of Education and Learning (EduLearn), 2024

Recently, the readability of texts has become a focus of reading research because it is believed to have implications for reading comprehension, which is of utmost importance in the field of English as a foreign language (EFL), particularly in the teaching, learning, and assessment of reading comprehension. Unfortunately, the influence of text readability on reading comprehension (and reading time) has not been well studied in the EFL context. Most text readability studies are conducted in medical contexts, and they are often limited to predicting readability scores for sample texts. To address this gap, the current study aimed to evaluate the influence of text readability levels (based on the Flesch-Kincaid grade level, FKGL) on students' reading comprehension and reading time. Data were collected through a reading test and analyzed using SPSS version 22. The Friedman test revealed that the distributions of students' reading comprehension scores (χ² = 197.532, p < .001) and reading times (χ² = 215.323, p < .001) differed across texts, suggesting that the readability of texts has a significant influence on both. This study contributes to the practice of reading instruction and assessment. Limitations and suggestions for further research are briefly discussed.
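The abstract does not state the formula behind the FKGL levels; for reference, the standard Flesch-Kincaid grade level is computed as

```latex
\mathrm{FKGL} = 0.39\left(\frac{\text{total words}}{\text{total sentences}}\right)
              + 11.8\left(\frac{\text{total syllables}}{\text{total words}}\right) - 15.59
```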

Readability of Texts: Human Evaluation Versus Computer Index

mcser.org

This paper reports a study that explored whether there is any difference between EFL expert readers' evaluations and computer-based evaluations of English text difficulty. Forty-three participants, comprising university EFL instructors and graduate students, read 10 different English passages and completed a Likert-type scale on their perception of the different components of text difficulty. The same 10 texts were also run through the Word program, and their Flesch readability indices were calculated. Comparisons were then made to see whether the readers' evaluations of the texts matched the calculated scores. Results of the study revealed significant differences between participants' evaluations of text difficulty and the Flesch readability indices of the texts. Findings also indicated that there was no significant difference between EFL instructors' and graduate students' evaluations of text difficulty. The findings imply that while readability formulas are valuable measures for evaluating the level of text difficulty, they should be used cautiously. Further research seems necessary to check the validity of readability formulas and the findings of the present study.

TEXT DIFFICULTY: A COMPARISON OF READABILITY FORMULAE AND EXPERTS’ JUDGMENT

Teachers of English, librarians, and researchers have long been interested in finding the right text for the right reader. In second language (L2) teaching, text writers often try to meet this demand by simplifying texts for readers. The resulting term "readability" can be defined as "the ease of reading words and sentences" (Hargis et al., 1998). The aim of this research was to compare several ways of finding the right text for the right reader: traditional readability formulae (Flesch Reading Ease, Flesch-Kincaid Grade Level), the Coh-Metrix Second Language (L2) Reading Index, which is a readability formula based on psycholinguistic and cognitive models of reading, and teachers' estimation of grade levels using leveled texts on a website. To do this, a selection of texts from a corpus of intuitively simplified texts was used (N = 30). Coh-Metrix readability levels, Flesch Reading Ease, and Flesch-Kincaid Grade Levels of the texts were calculated via the Coh-Metrix Web Tool. Three teachers of English were asked to decide the levels of the texts. When the relationship between the Coh-Metrix readability level, the traditional formulae, and the text levels on the website was analysed via SPSS, a weak negative correlation was found between Flesch-Kincaid Grade Level and the text levels on the website (-.39). There was also a weak negative correlation between the text levels on the website and Flesch Reading Ease scores (-.41). However, there was a moderate negative correlation between Coh-Metrix readability levels and the text levels on the website (-.63), while Teacher 1's estimations and the Coh-Metrix readability levels had a very strong positive correlation (.95). It was concluded that readability formulae can help L2 teachers when they select texts for their students for teaching and assessment purposes.
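The correlation analysis summarised above was run in SPSS; the sketch below is a hypothetical Python equivalent, with invented placeholder scores, showing how rank correlations between formula outputs and leveled-text grades might be computed.

```python
# Hypothetical illustration of the correlation analysis described above.
# The score lists are invented placeholders, not the study's data.
from scipy.stats import spearmanr

website_levels = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]                                # leveled-text grades from the website
fkgl_scores = [6.2, 6.0, 5.4, 5.1, 4.6, 4.4, 3.9, 3.5, 3.0, 2.7]               # Flesch-Kincaid Grade Levels
coh_metrix_l2 = [28.4, 27.9, 25.6, 24.8, 22.1, 21.7, 19.3, 18.6, 16.2, 15.5]   # Coh-Metrix L2 readability

for name, scores in [("FKGL", fkgl_scores), ("Coh-Metrix L2", coh_metrix_l2)]:
    rho, p_value = spearmanr(website_levels, scores)
    print(f"{name} vs website levels: rho = {rho:.2f}, p = {p_value:.3f}")
```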

Text Readability: A Snapshot

SALTeL Journal (Southeast Asia Language Teaching and Learning)

Selecting suitable reading materials is taxing and challenging for many English instructors. Text readability analysis can be used to automate the process of reading material selection as well as the assessment of reading ability for language learners. Readability formulas have been broadly used to determine text difficulty in terms of learners' grade level. Based on mathematical calculations, a readability formula examines certain features of a text in order to provide a rough approximation of its difficulty. This paper reflects on some aspects and issues of readability analysis.
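As an illustration of the kind of calculation such formulas perform, the minimal sketch below computes the Flesch Reading Ease score from surface counts; the syllable counter is a crude vowel-group heuristic, not the dictionary-based counting the original formula assumes.

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups; real formulas use more careful syllable counts."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

print(flesch_reading_ease("The cat sat on the mat. It was a sunny day."))
```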

Research Into Readability: Paradigms and Possibilities

2013

Readability refers to the process of matching the reader and the text. Research into readability developed from the 1920s up to the mid-1990s. The developments in this research area appeared to stop at that point, however, and very little research has been reported recently. In general, research into readability focused on the development of practical methods for matching reading materials to the reading abilities of student and adult readers. These efforts centred on the development of easily applied readability formulae for teachers and librarians to use. More recent readability research has involved a period of consolidation in which researchers sought to learn more about how the formulae work and how to improve them. In this paper, our aims are to describe:

Readability level of reading texts in Life: Elementary and Issues for Today

2017

The purposes of the study are to find out the readability level of 14 reading texts from Life: Elementary and of 6 reading texts from Issues for Today, and to see whether the readability levels of the reading texts in the two books are graded or not, using the Fry Readability Graph. The steps of the research are: (1) the researcher found the average number of syllables per 100 words, for both the Life: Elementary reading texts and the Issues for Today reading texts, which is defined as "x" on the graph. (2) Then, the researcher found the average number of sentences per 100 words for both sets of texts, which is defined as "y" on the graph. (3) After obtaining the "x" and "y" values, the researcher determined the readability level of each text by marking the point on the Fry Graph. (4) Then, the researcher put the results in the data analysis table. In...
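The two Fry Graph coordinates described in steps (1) and (2) can be computed mechanically; the sketch below is a hypothetical helper that uses a crude vowel-group syllable count, and the grade-level lookup in step (3) still has to be done against Fry's published graph.

```python
import re

def fry_coordinates(sample: str) -> tuple[float, float]:
    """Return (x, y) for the Fry Graph: x = syllables per 100 words, y = sentences per 100 words.

    Hypothetical helper: Fry's procedure averages three 100-word samples with manual
    syllable counts; here a single sample and a vowel-group heuristic stand in.
    """
    words = re.findall(r"[A-Za-z']+", sample)
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    sentences = len(re.findall(r"[.!?]+", sample))
    # Scale to a 100-word basis so samples that are not exactly 100 words
    # still land on comparable graph coordinates.
    x = syllables * 100 / len(words)
    y = sentences * 100 / len(words)
    return x, y

x, y = fry_coordinates("Open a sample of about one hundred words here. Count its syllables and sentences. Plot the point.")
print(f"x (syllables/100 words) = {x:.1f}, y (sentences/100 words) = {y:.1f}")
```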

The linguistic assumptions underlying readability formulae: a critique.

This article critically examines some of the linguistic assumptions underlying the readability formulae that are commonly used in schools and by publishing houses. Do these assumptions really enable readability formulae to offer a sound, scientific way of evaluating the difficulty of texts? This paper examines the linguistic criteria that form the basis for readability scores and argues that the criteria commonly used in readability formulae do not constitute a satisfactory basis for assessing reading difficulty.

Assessing Text Readability Using Cognitively Based Indices

TESOL Quarterly, 2008

Many programs designed to compute the readability of texts are narrowly based on surface‐level linguistic features and take too little account of the processes which a reader brings to the text. This study is an exploratory examination of the use of Coh‐Metrix, a computational tool that measures cohesion and text difficulty at various levels of language, discourse, and conceptual analysis. It is suggested that Coh‐Metrix provides an improved means of measuring English text readability for second language (L2) readers, not least because three Coh‐Metrix variables, one employing lexical coreferentiality, one measuring syntactic sentence similarity, and one measuring word frequency, have correlates in psycholinguistic theory. The current study draws on the validation exercise conducted by Greenfield (1999) with Japanese EFL students, which partially replicated Bormuth's (1971) study with American students. It finds that Coh‐Metrix, with its inclusion of the three variables, yields ...
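To make the first of those three variables concrete, the sketch below computes a simple content-word overlap between adjacent sentences; it is an illustrative stand-in for Coh-Metrix's lexical coreferentiality indices, not the tool's actual implementation.

```python
import re

# Illustrative stand-in for a lexical coreferentiality measure: the proportion of
# content words in each sentence that also appear in the preceding sentence.
STOPWORDS = {"the", "a", "an", "and", "or", "but", "of", "to", "in", "on", "is", "was", "it", "then"}

def adjacent_sentence_overlap(text: str) -> float:
    """Average overlap between adjacent sentences (0 = no shared content words, 1 = all shared)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    word_sets = [
        {w for w in re.findall(r"[a-z']+", s.lower()) if w not in STOPWORDS}
        for s in sentences
    ]
    pairs = list(zip(word_sets, word_sets[1:]))
    if not pairs:
        return 0.0
    return sum(len(prev & curr) / max(1, len(curr)) for prev, curr in pairs) / len(pairs)

print(adjacent_sentence_overlap("The reactor heats the fluid. The fluid then drives the turbine."))
```

Coh-Metrix combines many such cohesion indices with syntactic-similarity and word-frequency measures; this sketch only conveys the flavour of one of them.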

Readability analysis of teaching materials of English courses class 2 and 6 elementary school

LADU: Journal of Languages and Education

Background: Textbooks are one of the learning media used by elementary school students. They support teachers in the learning process and serve as a source of knowledge for students. As teaching materials, books contain subject matter written by an author according to the applicable curriculum and act as guidelines for teachers and students in the learning process. Purpose: This study describes the readability level of teaching materials in relation to children's cognition. Design and methods: The method used is qualitative, with the readability of the texts analyzed using the webfx readability tool. The data used as a source in this study consist of the texts contained in teaching materials for English lessons for grades 2 and 6. Results: Based on the webfx readability results, only the book titled Knowing Kinds of Animals Using the Game "Card Hunting" is intended for gr...