The contribution of lexical variables to the measurement of Italian texts' readability level
Related papers
Predicting Readability of Texts for Italian L2 Students: A Preliminary Study
ALTE 2017 Conference Proceedings: Learning and Assessment: Making the Connections (Bologna, 3–5 May 2017), 2017
Text selection and comparability for L2 students to read and comprehend are central concerns for both teaching and assessment purposes. Compared to subjective selection, quantitative approaches provide more objective information, analysing texts at the language and discourse level (Khalifa & Weir, 2009). Readability formulae such as the Flesch Reading Ease, the Flesch-Kincaid Grade Level and, for Italian, the GulpEase index (Lucisano & Piemontese, 1988) do not fully address the issue of text complexity. A new readability formula based on Coh-Metrix was proposed (Crossley, Greenfield, & McNamara, 2008), which takes into account a wider set of language and discourse features. A similar approach was proposed to assess the readability of Italian texts through a tool called READ-IT (Dell'Orletta, Montemagni, & Venturi, 2011). While READ-IT was tested on randomly selected newspaper texts, this contribution focuses on the development of a similar computational tool applied to texts specifically selected in the context of assessing Italian as an L2. Two text corpora have been collected from the CELI (Certificates of Italian Language) item bank at B2 and C2 level. Statistical differences in the occurrence of a set of linguistic and discursive features have been analysed according to four categories: length features, lexical features, morpho-syntactic features, and discursive features.
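As a point of reference for the formulae named above, the sketch below implements the published Flesch Reading Ease, Flesch-Kincaid Grade Level and GulpEase equations in Python. The tokenisation and vowel-run syllable counter are naive assumptions for illustration only; they do not reproduce the tooling used in any of these studies.

```python
import re

VOWELS = re.compile(r"[aeiouyàèéìíòóù]+")

def counts(text):
    # Naive splitting: sentences end on .!?, words are letter runs (illustrative only).
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-zÀ-ÿ]+", text)
    letters = sum(len(w) for w in words)
    # Crude syllable proxy: count vowel runs (an assumption, not a real syllabifier).
    syllables = sum(max(1, len(VOWELS.findall(w.lower()))) for w in words)
    return sentences, len(words), letters, syllables

def flesch_reading_ease(text):
    s, w, _, syl = counts(text)
    return 206.835 - 1.015 * (w / s) - 84.6 * (syl / w)

def flesch_kincaid_grade(text):
    s, w, _, syl = counts(text)
    return 0.39 * (w / s) + 11.8 * (syl / w) - 15.59

def gulpease(text):
    # GulpEase (Lucisano & Piemontese, 1988) is letter-based, so no syllabifier is needed.
    s, w, letters, _ = counts(text)
    return 89 + (300 * s - 10 * letters) / w
```

Note the design difference the abstract turns on: the two Flesch measures depend on a syllable count, which is error-prone across languages, while GulpEase counts letters, which is one reason it was proposed for Italian.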
Assessing document and sentence readability in less resourced languages and across textual genres
ITL - International Journal of Applied Linguistics, 2014
In this paper, we tackle three under-researched issues in the automatic readability assessment literature: the evaluation of text readability in less resourced languages, at the level of sentences (as opposed to documents), and across textual genres. Different solutions to these issues have been tested by using and refining READ-IT, the first advanced readability assessment tool for Italian, which combines traditional raw text features with lexical, morpho-syntactic and syntactic information. In READ-IT, readability assessment is carried out with respect to both documents and sentences, with the latter constituting an important novelty of the proposed approach: READ-IT shows high accuracy in the document classification task and promising results in the sentence classification scenario. By comparing the results of two versions of READ-IT, adopting a classification-based versus a ranking-based approach, we also show that readability assessment is strongly influenced by textual genre.
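To make the classification-based setup concrete, here is a minimal sketch of a feature-based readability classifier of the general kind described: documents are reduced to numeric features and a classifier separates easy from difficult texts. The feature set, the data, and the choice of a linear SVM are all assumptions for illustration, not READ-IT's actual configuration.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-document features: [avg sentence length, avg word length,
# type/token ratio, share of subordinate clauses].
X = [[12.0, 4.1, 0.55, 0.10],   # easy texts
     [11.5, 4.3, 0.52, 0.12],
     [28.4, 5.6, 0.71, 0.35],   # difficult texts
     [31.0, 5.9, 0.68, 0.40]]
y = [0, 0, 1, 1]                # 0 = easy, 1 = difficult

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X, y)
print(clf.predict([[20.0, 5.0, 0.60, 0.22]]))  # classify an unseen document
```

The same feature vectors can be computed per sentence instead of per document, which is the sentence-level scenario the paper highlights.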
The Importance of Vocabulary and Grammar for Measuring the Readability and Language Level
Journal of Education and Teaching (JET), 2024
The purpose of this research is to determine the level of difficulty (readability) and the language level of texts written by non-native speakers of Italian. For this purpose, texts produced by Greek candidates at B1 and B2 level for the Greek State Certificate of Language Proficiency (KPG) exam in Italian were selected. All data were collected from the KPG exams of May 2015 and November 2016. Specifically, from 1000 randomized KPG notebooks, a total of 80 notebooks (B1 and B2) were used, which were first digitized manually. In the second and third phases, these texts were measured using the READ-IT tool and SPSS 24. The results show that both the correct use of vocabulary (i.e., spelling) and the appropriateness of the vocabulary in relation to the content of the text determine the language level and degree of difficulty of the produced texts. All results feed into an existing tool named trat.exe, used by EKPA and Aristotle University to measure the readability of the Italian-language KPG exams. A valuable next step would be to deepen the analysis of the writing parameters with the goal of developing even more advanced software.
Basic elements for determining the language level and the Readability grade in text production
International Conference on Science, Innovations and Global Solutions (Poland), 2024
In order to measure readability, 316 texts from the Greek state certificate exams for the Italian language were selected (May 2015 & November 2016). The texts were manually digitized in Word 2010 format and measured with the READ-IT tool; the data were then processed using SPSS 24. Greek candidates at level B produce Italian texts whose readability depends on features interwoven with vocabulary, grammar and syntax. The aim was to form the basis for further research at other language levels (A & C) and in different languages, which would likely yield important findings about language learning by foreign speakers.
Analyzing the Adequacy of Readability Indicators to a Non-English Language
2019
Readability is a linguistic feature that indicates how difficult a text is to read. Traditional readability formulas were designed for the English language. This study evaluates their adequacy for the Portuguese language. We applied the traditional formulas to 10 parallel corpora. We verified that the Portuguese language received higher grade scores (lower readability) from the formulas that use the number of syllables per word or the number of complex words per sentence. Formulas that use letters per word instead of syllables per word output similar grade scores. Considering this, we evaluated the correlation of complex words in 65 Portuguese school books spanning 12 schooling years. We found that defining a complex word as a word with 4 or more syllables, instead of 3 or more syllables as originally used when the traditional formulas are applied to English texts, correlates better with the grade level of Portuguese school books. In the end, we adapted each traditional readability formula to the Portuguese language.
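The adaptation the abstract describes amounts to changing the complex-word threshold inside a complex-word formula. A minimal sketch, using the Gunning Fog index as a representative formula of this family (the study covers several; this one is chosen here for brevity) and a naive vowel-run syllable proxy:

```python
import re

def gunning_fog(text, complex_syllables=3):
    # Gunning Fog with a configurable complex-word threshold; the study above
    # suggests 4+ syllables fits Portuguese, versus the classic 3+ for English.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-zÀ-ÿ]+", text)
    # Crude syllable proxy: vowel runs (an assumption, not a real syllabifier).
    syl = lambda w: max(1, len(re.findall(r"[aeiouyáéíóúâêôãõà]+", w.lower())))
    complex_words = sum(1 for w in words if syl(w) >= complex_syllables)
    return 0.4 * (len(words) / sentences + 100 * complex_words / len(words))

# english_style = gunning_fog(texto, complex_syllables=3)
# portuguese_style = gunning_fog(texto, complex_syllables=4)
```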
THE RELATIONSHIP BETWEEN READABILITY AND LANGUAGE LEVEL
ISRGJEHL, 2024
Appropriate tools lead to valid and fair measurement, processing and evaluation of texts produced by foreign learners in a second language (Elder & Harding, 2008: 341-342). These data are also very important for creating exam instructions and tests tailored to language level (in this case B1 and B2, based on the Common European Framework of Reference for Languages) and to degree of difficulty, i.e. the readability grade (Lenzner, 2014: 678-681). Such data are needed across all language levels (A1-C2) and across different languages in order to create more advanced text evaluation software (Beacco, 2017: 9-19). Furthermore, the two largest Greek universities, EKPA and the Aristotle University of Thessaloniki, already use a very similar tool (trat.exe) to measure the readability of the Italian-language KPG exams.
READ-IT: Assessing readability of Italian texts with a view to text simplification
2011
In this paper, we propose a new approach to readability assessment with a specific view to the task of text simplification: the intended audience includes people with low literacy skills and/or with mild cognitive impairment. READ-IT represents the first advanced readability assessment tool for Italian, which combines traditional raw text features with lexical, morpho-syntactic and syntactic information. In READ-IT, readability assessment is carried out with respect to both documents and sentences, where the latter represents an important novelty of the proposed approach, creating the prerequisites for aligning the readability assessment step with the text simplification process. READ-IT shows high accuracy in the document classification task and promising results in the sentence classification scenario.
TEXT DIFFICULTY: A COMPARISON OF READABILITY FORMULAE AND EXPERTS’ JUDGMENT
Teachers of English, librarians, and researchers have been interested in finding the right text for the right reader for many years. In second language (L2) teaching, text writers often try to meet this demand by simplifying texts for readers. The resulting term "readability" can be defined as "the ease of reading words and sentences" (Hargis et al., 1998). The aim of this research was to compare several ways of finding the right text for the right reader: traditional readability formulae (Flesch Reading Ease, Flesch-Kincaid Grade Level), the Coh-Metrix Second Language (L2) Reading Index, a readability formula based on psycholinguistic and cognitive models of reading, and teachers' estimation of grade levels using levelled texts from a website. To this end, a selection of texts from a corpus of intuitively simplified texts was used (N = 30). Coh-Metrix readability levels, Flesch Reading Ease, and Flesch-Kincaid Grade Levels of the texts were calculated via the Coh-Metrix Web Tool. Three teachers of English were asked to judge the levels of the texts. When the relationship between the Coh-Metrix readability level, the traditional formulae and the text levels on the website was analysed via SPSS, a weak negative correlation was found between the Flesch-Kincaid Grade Level and the text levels on the website (r = −.39). Additionally, there was a weak negative correlation between the text levels on the website and the Flesch Reading Ease scores (r = −.41). However, there was a moderate negative correlation between Coh-Metrix readability levels and the text levels on the website (r = −.63), while Teacher 1's estimates and the Coh-Metrix readability levels had a very strong positive correlation (r = .95). It was concluded that readability formulae can help L2 teachers when they select texts for their students for teaching and assessment purposes.
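The correlations above can be reproduced in form (not in value) with a few lines of Python in place of SPSS. The scores below are invented placeholders, not the study's data; the sketch only shows the kind of computation involved.

```python
from scipy.stats import pearsonr, spearmanr

# Hypothetical scores for the same 8 texts: a formula's grade estimates
# and the website's assigned levels.
fk_grade = [3.2, 5.1, 4.0, 6.8, 2.5, 7.3, 5.9, 4.4]
site_level = [2, 4, 3, 6, 2, 7, 5, 4]

r, p = pearsonr(fk_grade, site_level)        # linear correlation
rho, p_s = spearmanr(fk_grade, site_level)   # rank correlation, safer for ordinal levels
print(f"Pearson r = {r:.2f} (p = {p:.3f}); Spearman rho = {rho:.2f} (p = {p_s:.3f})")
```

Since website levels are ordinal rather than interval data, a rank correlation such as Spearman's rho is often the more defensible choice for this design.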
The linguistic assumptions underlying readability formulae: a critique
This article critically examines some of the linguistic assumptions underlying the readability formulae that are commonly used in schools and by publishing houses. Do these assumptions really enable readability formulae to offer a sound, scientific way of evaluating the difficulty of texts? This paper examines the linguistic criteria that form the basis for readability scores and argues that the criteria commonly used in readability formulae do not constitute a satisfactory basis for assessing reading difficulty.
Readability of Texts: Human Evaluation Versus Computer Index
mcser.org
This paper reports a study which aimed to explore whether there is any difference between the evaluation of English text difficulty by expert EFL readers and its computer-based evaluation. Forty-three participants, including university EFL instructors and graduate students, read 10 different English passages and completed a Likert-type scale on their perception of the different components of text difficulty. In parallel, the same 10 English texts were processed in Microsoft Word and the Flesch readability index of each text was calculated. Comparisons were then made to see whether the readers' evaluations of the texts matched the calculated scores. Results of the study revealed significant differences between participants' evaluations of text difficulty and the Flesch readability index of the texts. Findings also indicated that there was no significant difference between EFL instructors' and graduate students' evaluations of text difficulty. The findings imply that while readability formulas are valuable measures for evaluating the level of text difficulty, they should be used cautiously. Further research seems necessary to check the validity of readability formulas and the findings of the present study.
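The significance test behind a comparison of this kind can be sketched as a paired test over the same 10 texts. The numbers below are invented, and the assumption that both measures have been rescaled to a common 0-1 difficulty scale (1 = hardest) is mine; the paper does not specify its exact procedure.

```python
from scipy.stats import ttest_rel

# Hypothetical per-text difficulty on a common 0-1 scale:
# mean human Likert ratings (rescaled) and inverted, rescaled Flesch scores.
human  = [0.42, 0.55, 0.38, 0.71, 0.64, 0.50, 0.33, 0.80, 0.47, 0.59]
flesch = [0.30, 0.62, 0.25, 0.58, 0.75, 0.41, 0.29, 0.66, 0.52, 0.48]

t, p = ttest_rel(human, flesch)  # paired: same 10 texts under both measures
print(f"paired t = {t:.2f}, p = {p:.3f}")  # p < .05 would indicate a systematic gap
```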