A New Scoring Method for Item Response Theory Analysis of C-Tests
This study proposes a new method for scoring C-Tests as measures of general language proficiency. In this approach, the unit of analysis is the sentence rather than the gap or the passage: the gaps correctly restored in each sentence are summed to form a sentence score, and each sentence enters the analysis as a polytomous item, analyzed with the Rasch partial credit model. To evaluate the new method, its results were compared with those from gap-level and passage-level analyses, as well as from combining locally dependent gaps into super-items, using the dichotomous Rasch model, the rating scale model, and the partial credit model. A C-Test comprising four English passages, each containing 25 gaps, was administered to 160 participants, and the resulting scores were subjected to dichotomous and polytomous Rasch analyses. The models were compared on individual item fit, person/item separation and reliability, global fit (e.g., deviance), unidimensionality, local item dependence (LID), and person parameters. The results support the effectiveness of the new scoring method: compared with the super-item approach, the suggested strategy increases the number of items, provides information at the sentence level, and reduces the impact of LID.
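The core of the scoring strategy described above can be sketched in a few lines: dichotomous gap-level responses (1 = gap correctly restored, 0 = not) are summed within each sentence, so that every sentence becomes a single polytomous item ready for a partial credit Rasch analysis. The function and data layout below are illustrative assumptions, not the authors' actual implementation.

```python
def sentence_scores(gap_responses, sentence_of_gap):
    """Aggregate dichotomous gap scores into polytomous sentence scores.

    gap_responses   : list of 0/1 ints, one per gap, for a single test taker
    sentence_of_gap : list of sentence indices, parallel to gap_responses
    returns         : dict mapping sentence index -> summed sentence score
    """
    scores = {}
    for resp, sent in zip(gap_responses, sentence_of_gap):
        # Each correctly restored gap adds one point to its sentence's score.
        scores[sent] = scores.get(sent, 0) + resp
    return scores

# Hypothetical example: 6 gaps spread over 3 sentences of one passage
responses = [1, 0, 1, 1, 1, 0]
sentences = [0, 0, 1, 1, 2, 2]
print(sentence_scores(responses, sentences))  # {0: 1, 1: 2, 2: 1}
```

The resulting sentence scores (here ranging over 0, 1, or 2 for two-gap sentences) would then be passed to a partial credit model estimator; each sentence's maximum category equals its number of gaps.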