What Does the Research Say About Value-Added Models?

Value-Added Research in Education: Reliability, Validity, Efficacy, and Usefulness (Wing Institute Original Paper)

2020

Value-added modeling (VAM) is a statistical approach that provides quantitative performance measures for monitoring and evaluating schools and other aspects of the education system. VAM comprises a collection of complex statistical techniques that use standardized test scores to estimate the effects of individual schools or teachers on student performance. Although the VAM approach holds promise, serious technical issues have been raised about using VAM as a high-stakes instrument in accountability initiatives. The key question remains: can VAM estimates derived from standardized test scores serve as a proxy for teaching quality? To date, research on the efficacy of VAM is mixed: some studies support VAM, while others suggest that model estimates are unstable over time and subject to bias and imprecision. A second concern is VAM's reliance on standardized tests as the sole measure of student performance. Despite these valid concerns, VAM has proved valuable in performance improvement efforts when used cautiously in combination with other measures of student performance, such as end-of-course tests, final grades, and structured classroom observations.
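To make the idea concrete, the sketch below shows one common VAM specification: a covariate-adjustment model that regresses students' current test scores on their prior-year scores plus teacher fixed effects, with the centered teacher coefficients serving as the value-added estimates. This is a minimal illustration only, not the model used in any study summarized here, and all column and function names (estimate_teacher_effects, prior_score, current_score, teacher_id) are hypothetical.

```python
# Minimal sketch of a covariate-adjustment value-added model: regress each
# student's current test score on their prior-year score plus teacher fixed
# effects. The centered teacher coefficients are the "value-added" estimates.
# Column names below are hypothetical, chosen only for illustration.
import pandas as pd
import statsmodels.formula.api as smf

def estimate_teacher_effects(df: pd.DataFrame) -> pd.Series:
    """Return one estimated value-added effect per teacher.

    Expects columns: current_score, prior_score, teacher_id (hypothetical names).
    """
    # C(teacher_id) expands teacher IDs into fixed-effect dummy variables;
    # "- 1" drops the intercept so every teacher receives its own coefficient.
    model = smf.ols("current_score ~ prior_score + C(teacher_id) - 1", data=df).fit()
    # Keep only the teacher dummies, discarding the prior_score slope.
    effects = model.params.filter(like="C(teacher_id)")
    # Center the effects so each reads as a deviation from the average teacher.
    return effects - effects.mean()

# Example usage with toy data:
df = pd.DataFrame({
    "teacher_id":    ["A", "A", "B", "B", "C", "C"],
    "prior_score":   [50, 60, 55, 65, 45, 70],
    "current_score": [55, 66, 54, 63, 50, 78],
})
print(estimate_teacher_effects(df))
```

Even in this toy form, the sketch hints at why the literature reports instability: with few students per teacher, each fixed-effect estimate rests on a handful of noisy scores, so rankings can shift substantially from year to year.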

Value of Value-Added Models Based on Student Outcomes to Evaluate Teaching

2016

Prologue

From the title above, you're probably expecting a spectacularly explosive treatment of the topic with bare-knuckle, mano-a-mano fights, motorcycle and car crashes, SWAT teams smashing through doors, and acrobatic high-speed chases on the Las Vegas Strip. Wait! That sounds more like a day in the life of Jason Bourne. That's not exactly journal fare. What were you thinking?

Bubbling Student Outcomes

Instead, this article will bludgeon you with pointed statistical issues and psychometric standards introduced in my JFD article two years ago, give or take a day or two (Berk, 2014). That prequel dealt with whether student outcomes should be used as one among 15 possible sources of evidence in formative (teaching and course improvement), summative (annual review, contract renewal, promotion & tenure, teaching awards), and program evaluation decisions (accreditation and accountability). I forgot the title. It was something like "Should Student Outcomes Be Used to Evaluate...