What Can Readability Measures Really Tell Us About Text Complexity?
Related papers
Evaluation of readability and content of texts on autism spectrum disorder
Eskişehir Türk Dünyası Uygulama ve Araştırma Merkezi Halk Sağlığı Dergisi
Informative texts on websites may make positive contributions to patient-physician communication and patient compliance. The readability and comprehensibility of information resources on the Internet are as important as their content, accuracy, and reliability. Access to accurate and understandable resources for individuals who want to learn about Autism Spectrum Disorder (ASD) will play an important role in the management of ASD. Our study aimed to evaluate the content and readability of informative texts about ASD presented on Turkish websites. A total of 400 websites were evaluated in a Google search using the keywords "autism, autism spectrum disorder, autistic disorder, pervasive developmental disorder". Average readability was analyzed using the Ateşman and Bezirci-Yılmaz readability formulas. The texts were divided into two groups, "websites prepared by healthcare professionals" and "websites prepared by non-healthcare professionals", and compared. Forty-three websites were eligible for evaluation. The readability level of the websites was "difficult" according to the Ateşman formula and "undergraduate level" according to the Bezirci-Yılmaz formula. The mean content percentage of all evaluated websites (n=43) was 65.12±22.71; it was 81.18±19.32 for websites prepared by healthcare professionals and 42.00±3.94 for those prepared by non-healthcare professionals (p=0.001). Access to health information on the Internet is of critical value for individuals with chronic diseases and their families. Early diagnosis of children with ASD and access to early intensive intervention play an important role in the prognosis of the disorder. The readability and comprehensibility of texts on websites, which are the first source of reference for most families, may contribute to the management of ASD.
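The Ateşman formula used in the study above adapts Flesch-style readability to Turkish. A minimal sketch, assuming the published Ateşman coefficients; the vowel-based syllable counter (each Turkish vowel marks one syllable) and the naive sentence splitter are simplifying assumptions, not the study's actual tooling:

```python
# Sketch of the Ateşman readability formula for Turkish.
# Scores run roughly 1-100; lower means harder to read.
TURKISH_VOWELS = set("aeıioöuüAEIİOÖUÜ")

def atesman_score(text):
    """Ateşman readability: 198.825 - 40.175*(syllables/word) - 2.610*(words/sentence)."""
    # Naive sentence split on terminal punctuation (an assumption).
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    words = text.split()
    # In Turkish, syllable count equals vowel count.
    syllables = sum(1 for ch in text if ch in TURKISH_VOWELS)
    avg_syllables_per_word = syllables / len(words)
    avg_words_per_sentence = len(words) / len(sentences)
    return 198.825 - 40.175 * avg_syllables_per_word - 2.610 * avg_words_per_sentence
```

A short, simple sentence such as "Ali okula gitti." scores in the "very easy" band, while long, morphologically heavy sentences push the score down toward the "difficult" band the study reports.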
Online Readability and Text Complexity Analysis with TextEvaluator
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, 2015
We have developed the TextEvaluator system for providing text complexity and Common Core-aligned readability information. Detailed text complexity information is provided by eight component scores, presented in such a way as to aid in the user's understanding of the overall readability metric, which is provided as a holistic score on a scale of 100 to 2000. The user may select a targeted US grade level and receive additional analysis relative to it. This and other capabilities are accessible via a feature-rich front-end, located at http://texteval-pilot.ets.org/TextEvaluator/.
Proceedings of the Workshop on Automatic Text Simplification - Methods and Applications in the Multilingual Society (ATS-MA 2014), 2014
Few resources are currently available to support the development and evaluation of automatic text simplification (TS) systems for specific target populations. Those that exist comprise parallel corpora consisting of texts in their original form and in a form that is more accessible for different categories of target reader, including neurotypical second language learners and young readers. In this paper, we investigate the potential to exploit resources developed for such readers to support the development of a text simplification system for use by people with autistic spectrum disorders (ASD). We analysed four corpora in terms of nineteen linguistic features that pose obstacles to reading comprehension for people with ASD. The results indicate that the Britannica TS parallel corpus (aimed at young readers) and the Weekly Reader TS parallel corpus (aimed at second language learners) may be suitable for training a TS system to assist people with ASD. Two sets of classification experiments intended to discriminate between original and simplified texts according to the nineteen features lent further support to these findings.
Text readability and intuitive simplification: A comparison of readability formulas
Reading in a Foreign Language, 2011
Texts are routinely simplified for language learners with authors relying on a variety of approaches and materials to assist them in making the texts more comprehensible. Readability measures are one such tool that authors can use when evaluating text comprehensibility. This study compares the Coh-Metrix Second Language (L2) Reading Index, a readability formula based on psycholinguistic and cognitive models of reading, to traditional readability formulas on a large corpus of texts intuitively simplified for language learners. The goal of this study is to determine which formula best classifies text level (advanced, intermediate, beginner) with the prediction that text classification relates to the formulas’ capacity to measure text comprehensibility. The results demonstrate that the Coh-Metrix L2 Reading Index performs significantly better than traditional readability formulas, suggesting that the variables used in this index are more closely aligned to the intuitive text processing employed by authors when simplifying texts.
Six Good Predictors of Autistic Text Comprehension
This paper presents our investigation of the ability of 33 readability indices to account for the reading comprehension difficulty posed by texts for people with autism. The evaluation by autistic readers of 16 text passages is described, a process which led to the production of the first text collection for which readability has been evaluated by people with autism. We present the findings of a study to determine which of the 33 indices can successfully discriminate between the difficulty levels of the text passages, as determined by our reading experiment involving autistic participants. The discriminatory power of the indices is further assessed through their application to the FIRST corpus which consists of 25 texts presented in their original form and in a manually simplified form (50 texts in total), produced specifically for readers with autism.
Readability assessment for text simplification
We describe a readability assessment approach to support the process of text simplification for poor literacy readers. Given an input text, the goal is to predict its readability level, which corresponds to the literacy level that is expected from the target reader: rudimentary, basic, or advanced. We complement features traditionally used for readability assessment with a number of new features, and experiment with alternative ways to model this problem using machine learning methods, namely classification, regression, and ranking. The best resulting model is embedded in an authoring tool for Text Simplification.
Readability of Texts: Human Evaluation Versus Computer Index
mcser.org
This paper reports a study that explored whether there is any difference between the evaluations of expert EFL readers and computer-based evaluation of English text difficulty. Forty-three participants, including university EFL instructors and graduate students, read 10 different English passages and completed a Likert-type scale on their perception of the different components of text difficulty. The same 10 English texts were also analyzed with Microsoft Word, and the Flesch Readability indices of the texts were calculated. Comparisons were then made to see whether readers' evaluations of the texts matched the calculated scores. Results of the study revealed significant differences between participants' evaluations of text difficulty and the Flesch Readability indices of the texts. Findings also indicated that there was no significant difference between EFL instructors' and graduate students' evaluations of text difficulty. The findings of the study imply that while readability formulas are valuable measures for evaluating the level of text difficulty, they should be used cautiously. Further research seems necessary to check the validity of readability formulas and the findings of the present study.
TEXT DIFFICULTY: A COMPARISON OF READABILITY FORMULAE AND EXPERTS’ JUDGMENT
Teachers of English, librarians, and researchers have long been interested in finding the right text for the right reader. In second language (L2) teaching, text writers often try to meet this demand by simplifying texts for readers. The resulting term "readability" can be defined as "the ease of reading words and sentences" (Hargis et al., 1998). The aim of this research was to compare several ways of finding the right text for the right reader: traditional readability formulae (Flesch Reading Ease, Flesch-Kincaid Grade Level), the Coh-Metrix Second Language (L2) Reading Index, a readability formula based on psycholinguistic and cognitive models of reading, and teachers' estimation of grade levels using leveled texts from a website. To do this, a selection of texts from a corpus of intuitively simplified texts was used (N = 30). Coh-Metrix readability levels, Flesch Reading Ease, and Flesch-Kincaid Grade Levels of the texts were calculated via the Coh-Metrix Web Tool, and three teachers of English were asked to judge the levels of the texts. When the relationships between the Coh-Metrix readability level, the traditional formulae, and the text levels on the website were analysed via SPSS, a weak negative correlation was found between Flesch-Kincaid Grade Level and the text levels on the website (r = −.39). There was also a weak negative correlation between the text levels on the website and Flesch Reading Ease scores (r = −.41), and a moderate negative correlation between Coh-Metrix readability levels and the text levels on the website (r = −.63), while Teacher 1's judgments and Coh-Metrix readability levels had a very strong positive correlation (r = .95). It was concluded that readability formulae can help L2 teachers when they select texts for their students for teaching and assessment purposes.
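The two traditional formulae compared in the study above are computed directly from surface counts. A minimal sketch using the standard published coefficients; the vowel-group syllable heuristic is an assumption (production tools such as Coh-Metrix use syllable dictionaries, so exact scores will differ):

```python
import re

def count_syllables(word):
    """Crude heuristic: count vowel groups, discounting a trailing silent 'e'."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_scores(text):
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level) for English text."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    asl = len(words) / len(sentences)                          # avg sentence length
    asw = sum(count_syllables(w) for w in words) / len(words)  # avg syllables per word
    reading_ease = 206.835 - 1.015 * asl - 84.6 * asw
    grade_level = 0.39 * asl + 11.8 * asw - 15.59
    return reading_ease, grade_level
```

Note that the two scores run in opposite directions (higher Reading Ease means easier text, higher Grade Level means harder), which is why both correlate negatively or positively with leveled-text rankings depending on the formula, as in the correlations reported above.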
Text Complexity and Readability Measures: An Examination of Historical Trends
2014
In recent years, much attention has been given to the readability and complexity of texts. With the move toward the use of Common Core State Standards (CCSS) by many states, policymakers and educators have directed attention to the ability of American students to successfully navigate complex text. This concern with text complexity and readability is not new; in fact, the earliest American attempt to use quantitative measures to examine text began in the late 19th century.