Azizi, M. & Majdeddin, Kh. (2014). On the validity of IELTS writing component: Do raters assess what they are supposed to?
2014, Modern Journal of Language Teaching Methods
ABSTRACT

Validity is a crucial test quality, and presenting a strong validity argument is an essential, ongoing process in the development of large-scale language tests such as IELTS and TOEFL. However, the validity evidence presented for the writing and speaking skills, whose evaluation is subjective by nature, is somewhat shaky in comparison with that for the other two skills. The present study examined whether raters actually assess test takers' writing samples according to the constructs defined in the scoring rubric. Using standard multiple regression, the predictive ability of three objective measures, namely fluency, grammatical complexity, and accuracy, was checked against learners' scores on IELTS Writing Task 2. Preliminary analysis showed no violation of the assumptions underlying multiple regression. The results indicate that the model explains 50% of the variance in the dependent variable, i.e., learners' scores on IELTS Writing Task 2 (adjusted R² = .501), and that it was statistically significant, F(3, 37) = 14.40, p < .001. However, among the independent variables, only the accuracy measure made a statistically significant unique contribution to R², accounting for 40% of the variance, indicating that the accuracy of the texts written by L2 learners is the most important factor affecting the scores they receive on the IELTS writing task. It seems that raters are so heavily affected by the accuracy of test takers' texts that they overlook the other text qualities specified in the scoring rubric.

KEYWORDS: IELTS writing test, Validity, Fluency, Grammatical complexity, Accuracy
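The analysis reported in the abstract is a standard multiple regression with three predictors and one outcome. The sketch below, in Python with pandas and statsmodels, shows how such an analysis might be run and how a predictor's unique contribution to R² can be estimated by comparing the full model with a reduced model. The file writing_samples.csv and the column names fluency, complexity, accuracy, and band_score are hypothetical placeholders, not the authors' materials.

```python
# Minimal sketch, assuming a hypothetical dataset with one row per test taker:
# three objective text measures (predictors) and the IELTS Task 2 band score (outcome).
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("writing_samples.csv")  # hypothetical columns: fluency, complexity, accuracy, band_score

predictors = ["fluency", "complexity", "accuracy"]
X = sm.add_constant(df[predictors])      # add intercept term
y = df["band_score"]

# Full model: band_score ~ fluency + complexity + accuracy
full_model = sm.OLS(y, X).fit()
print(full_model.summary())              # coefficients, t-tests, F statistic, p-values
print("Adjusted R^2:", full_model.rsquared_adj)

# Unique contribution of each predictor: drop it, refit, and compare R^2
# (the squared semi-partial correlation), mirroring the 40% figure
# reported for accuracy in the abstract.
for p in predictors:
    others = [q for q in predictors if q != p]
    reduced = sm.OLS(y, sm.add_constant(df[others])).fit()
    print(f"{p}: unique R^2 = {full_model.rsquared - reduced.rsquared:.3f}")
```

The drop-one comparison in the loop is one common way to express a predictor's unique share of explained variance; the reported 40% contribution of accuracy corresponds to this kind of decomposition under the assumptions stated above.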