Automatic assessment of children's reading level

Automatically Assess Children’s Reading Skills

2020

Assessing reading skills is an important task teachers must perform at the beginning of a new school year to evaluate the starting level of the class and properly plan the next learning activities. Digital tools based on automatic speech recognition (ASR) can be very useful in supporting teachers in this task, which is currently very time consuming and prone to human error. This paper presents a web application for automatically assessing the fluency and accuracy of oral reading in children attending Italian primary and lower secondary schools. Our system, based on ASR technology, implements Cornoldi's MT battery, a well-known Italian test for assessing reading skills. The front-end of the system was designed following the participatory design approach, involving end users from the beginning of the creation process. Teachers may use our system both to test students' reading skills and to monitor their performance over time. In fact, the system offers an effective graphical visual...

Developing an automatic assessment tool for children's oral reading

Proceedings of the 9th …, 2006

Automation of oral reading assessment and of feedback in a reading tutor is a very challenging task. This paper describes our research aimed at developing such automated systems. The first topic is the recording and annotation of CHOREC, the Flemish database of children's oral ...

A System for Assessing Children's Readings at School

7th ISCA Workshop on Speech and Language Technology in Education (SLaTE 2017)

In this paper we describe a system for analyzing the reading errors made by children in primary and middle schools. To assess children's reading skills in terms of reading accuracy and speed, a standard reading achievement test, developed by educational psychologists and named "Prove MT" (MT reading test), is used in Italian schools. This test is based on a set of texts specific to different ages, from 7 to 13 years old. At present, during the test, children are asked to read short stories aloud, while teachers manually write down the reading errors on a sheet and then compute a total score based on several measures, such as the duration of the whole reading, the number of syllables read per second, and the number and type of errors. The system we have developed aims to support teachers in this task by automatically detecting the reading errors and estimating the needed measures. To do this we use an automatic speech-to-text transcription system that employs a language model (LM) trained on the texts containing the stories to read. In addition, we embed in the LM an error model that takes into account typical reading errors, mostly consisting of pronunciation errors, substitutions of syllables or words, word truncations, etc. To evaluate the performance of our system we collected 20 audio recordings, uttered by children aged 8-13 reading a novel belonging to the "Prove MT" set. It is worth mentioning that the error model proposed in this paper for assessing children's reading capabilities performs close to an "oracle" error model obtained from manual transcriptions of the readings themselves.
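The speed and accuracy measures the abstract mentions (reading duration, syllables read per second, error counts) can be sketched as a small computation. This is an illustrative sketch only: the syllable heuristic and the returned fields are invented here and are not taken from the published Prove MT norms.

```python
# Illustrative sketch of MT-style speed/accuracy measures.
# The syllable heuristic and field names are hypothetical,
# not the actual "Prove MT" scoring procedure.

def syllable_count(word: str) -> int:
    """Crude Italian syllable estimate: count groups of vowels."""
    vowels = "aeiouàèéìíòóù"
    count, prev_vowel = 0, False
    for ch in word.lower():
        is_vowel = ch in vowels
        if is_vowel and not prev_vowel:
            count += 1
        prev_vowel = is_vowel
    return max(count, 1)

def mt_measures(reference_words, read_errors, duration_s):
    """Return reading speed (syllables/second) and the error count."""
    syllables = sum(syllable_count(w) for w in reference_words)
    return {
        "syllables": syllables,
        "syll_per_sec": round(syllables / duration_s, 2),
        "errors": read_errors,
    }

# A child reads a 5-word sentence in 4 seconds with 1 error:
m = mt_measures("il gatto dorme sul divano".split(),
                read_errors=1, duration_s=4.0)
print(m)  # {'syllables': 9, 'syll_per_sec': 2.25, 'errors': 1}
```

In the real system these inputs would come from the ASR alignment rather than being supplied by hand.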

Performance of Automated Scoring for Children’s Oral Reading

2011

For adult readers, an automated system can produce oral reading fluency (ORF) scores (e.g., words read correctly per minute) that are consistent with scores provided by human evaluators (Balogh et al., 2005, and in press). Balogh's work on NAAL materials used passage-specific data to optimize statistical language models and scoring performance. The current study investigates whether an automated system can produce scores for young children's reading that are consistent with human scores. A novel aspect of the present study is that text-independent rule-based language models were employed (Cheng and Townshend, 2009) to score reading passages that the system had never seen before. Oral reading performances were collected over cell phones from 1st-, 2nd-, and 3rd-grade children (n = 95) in a classroom environment. Readings were scored 1) in situ by teachers in the classroom, 2) later by expert scorers, and 3) by an automated system. Statistical analyses provide evidence th...

Automatic assessment of children’s oral reading skills

2020

Fluent reading is a critical component of literacy skills and necessary for overall personality development. Proper assessment and feedback to students are therefore essential. This project is an effort to automate reading skill evaluation. We collected data of English stories read by Indian children who are L2 speakers of English and asked teachers to rate them on different lexical and prosodic attributes. The ratings were then predicted using a machine learning model trained with different acoustic-prosodic features. The work aims at determining the optimal set of predictive features for rating the paragraphs on the scoring attributes. The trained machine learning models are expected to accurately mimic the teachers' ratings on unseen test data.

Evaluation of reading performance of primary school children: Objective measurements vs. subjective ratings

6th Workshop on Child Computer Interaction (WOCCI 2017), 2017

We analyze here readings of the same reference text by 116 children. We show that several factors strongly impact the subjective rating of fluency, notably the number of correct words, repetitions, errors, and syllables spelled per minute. We succeeded in predicting four subjective scores (rated between 1 and 4 by human raters) from such objective measurements with rather high precision (R > .8 for 3 out of 4 scores). This opens the way for automatic multidimensional assessment of reading fluency using calibrated texts.
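Predicting a subjective score from objective measurements, as described above, amounts to fitting a regression model. The sketch below fits a least-squares linear model with NumPy; the feature columns and all data values are invented for illustration and are not the paper's features or data.

```python
import numpy as np

# Hypothetical data: each row = [correct words/min, repetitions, errors];
# target = fluency score rated 1-4 by a human rater. All values invented.
X = np.array([[120, 0, 1],
              [ 80, 3, 6],
              [100, 1, 3],
              [ 60, 5, 9],
              [110, 1, 2],
              [ 70, 4, 7]], dtype=float)
y = np.array([4.0, 2.0, 3.0, 1.0, 3.5, 1.5])

# Least-squares fit of score = w . [features, 1] (bias column appended).
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Correlation between predicted and human scores on the training data.
pred = A @ w
r = np.corrcoef(pred, y)[0, 1]
print(f"correlation R = {r:.2f}")
```

A real evaluation would of course report R on held-out readings, as the paper does, rather than on the fitted data.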

Determination of Reading Levels of Primary School Students

Universal Journal of Educational Research

This study aimed to evaluate the reading performance of 2nd, 3rd and 4th graders. The study was designed as a survey (scanning) model. The research was conducted with 2nd, 3rd and 4th grade students studying in Bayburt, Turkey. The students' appropriate reading rates, reading speeds and reading errors were examined by asking them to read a narrative text appropriate to their grade. The texts were selected from the books distributed to the schools by the Ministry of National Education. An Error Analysis Inventory was used to diagnose the students' reading difficulties, to collect data about their reading performance, and to determine the reading levels of individual readers. The present study is important because it identifies students with reading difficulties and determines the programs necessary to overcome these difficulties.

Automatic Assessment of Children's L2 Reading for Accuracy and Fluency

This project targets using state-of-the-art automatic speech recognition technology, coupled with new work on predicting the relevant prosody ratings, to build an oral reading assessment tool. A reliable automatic system can prove invaluable in helping children acquire basic reading skills, apart from facilitating the monitoring of literacy programs at large scale. In the present work, we target middle-school learners of English as a second language in a rural Indian setting. We present the design and observed characteristics of our field-collected oral reading dataset to outline the research challenges faced. Recently proposed solutions to the training of robust acoustic models in the face of limited task-specific data are evaluated for predicting the child's word decoding accuracy and for obtaining word-level alignments for prosody scoring. A language model is designed to exploit the known text and observed reading errors while being flexible enough to adapt to new reading material without further training. Based on a scoring rubric proposed by a national mission on literacy assessment in India, we present an automatic system that detects reading miscues and computes fluency indicators at the sentence level, which are then correlated with fine-grained subjective ratings by an expert.
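The miscue-detection step described above relies on comparing the known reference text against what the child actually said. A minimal sketch of that idea, using a plain edit-distance alignment between the reference and an ASR hypothesis (the example sentence is invented, and this is not the authors' actual language-model approach):

```python
import difflib

def detect_miscues(reference: str, hypothesis: str):
    """Align the known text against the ASR output and label
    substitutions, deletions (skipped words) and insertions."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    miscues = []
    sm = difflib.SequenceMatcher(a=ref, b=hyp)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "replace":
            miscues.append(("substitution", ref[i1:i2], hyp[j1:j2]))
        elif op == "delete":
            miscues.append(("deletion", ref[i1:i2], []))
        elif op == "insert":
            miscues.append(("insertion", [], hyp[j1:j2]))
    return miscues

# The child skips "quick" and misreads "lazy" as "lacy":
print(detect_miscues("the quick brown fox jumps over the lazy dog",
                     "the brown fox jumps over the lacy dog"))
```

A production system would align against the ASR lattice with an error model rather than a flat word sequence, but the labeling of substitutions, deletions, and insertions is the same.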

Human and automated assessment of oral reading fluency

Journal of Educational Psychology, 2013

This article describes a comprehensive approach to fully automated assessment of children's Oral Reading Fluency (ORF), one of the most informative and frequently administered measures of children's reading ability. Speech recognition and machine learning techniques are described that model the three components of oral reading fluency: word accuracy, reading rate, and expressiveness. These techniques are integrated into a computer program that produces estimates of these components during a child's one-minute reading of a grade-level text. The ability of the program to produce accurate assessments was evaluated on a corpus of 783 one-minute recordings of 313 students reading grade-leveled passages without assistance. Established standardized metrics of accuracy and rate (Words Correct Per Minute (WCPM)) and expressiveness (National Assessment of Educational Progress expressiveness scale) were used to compare ORF estimates produced by expert human scorers and automatically generated ratings. Experimental results showed that the proposed techniques produced WCPM scores that were within 3 to 4 words of human scorers across students in different grade levels and schools. The results also showed that computer-generated ratings of expressive reading agreed with human raters better than the human raters agreed with each other. The results of the study indicate that computer-generated ORF assessments produce an accurate multidimensional estimate of children's oral reading ability that approaches agreement among human scorers. The implications of these results for future research and near-term benefits to teachers and students are discussed.
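The WCPM metric referenced above is simple once the words-correct count and reading time are available: it normalizes correctly read words to a one-minute reading. A minimal sketch (variable names are illustrative):

```python
def wcpm(words_correct: int, reading_time_s: float) -> float:
    """Words Correct Per Minute: correctly read words
    normalized to a one-minute reading."""
    return words_correct * 60.0 / reading_time_s

# A child reads 47 words correctly in a 60-second timed reading:
print(wcpm(47, 60.0))   # 47.0

# 30 words correct in 45 seconds extrapolates to:
print(wcpm(30, 45.0))   # 40.0
```

The hard part, which the paper addresses, is producing the `words_correct` count automatically from the child's speech; the normalization itself is trivial.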

On the Automatic Classification of Reading Disorders

In this paper, we present an automatic classification approach to identify reading disorders in children. The identification is based on a standardized test. In the original setup the test is performed by a human supervisor who measures the reading duration and, at the same time, notes down all the reading errors made by the child. In this manner we recorded tests of 38 children who were suspected of having reading disorders. The data was then presented to an automatic system that employs speech recognition to identify the reading errors. In a subsequent classification experiment, based on the speech recognizer's output and the duration of the test, 94.7% of the children could be classified correctly.