Testing the metrics
MIT researchers refine yardstick for measuring schools and teachers
In recent years, 14 states in the U.S. have begun assessing teachers and schools using Value-Added Models, or VAMs. The idea is simple enough: A VAM looks at year-to-year changes in standardized test scores among students, and rates those students’ teachers and schools accordingly. When students are found to improve or regress, teachers and schools get the credit or the blame.
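The core of a VAM can be illustrated with a toy calculation. The sketch below is a deliberately crude, hypothetical example (the records, teacher names, and the simple gain-averaging are all assumptions, not the models states actually use): it attributes each student's year-over-year score change to that student's teacher and averages the gains.

```python
from statistics import mean

# Hypothetical records: (teacher, last_year_score, this_year_score).
records = [
    ("Ms. A", 60, 72), ("Ms. A", 55, 63), ("Ms. A", 70, 74),
    ("Mr. B", 80, 79), ("Mr. B", 65, 66), ("Mr. B", 72, 70),
]

def value_added(records):
    """Average year-over-year score gain per teacher (a crude VAM)."""
    gains = {}
    for teacher, before, after in records:
        gains.setdefault(teacher, []).append(after - before)
    return {teacher: mean(g) for teacher, g in gains.items()}

print(value_added(records))
```

Real VAMs are far more elaborate, typically adjusting for student demographics and prior achievement, but the basic logic of crediting teachers with their students' test-score changes is the same.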
Perhaps not surprisingly, however, VAMs have generated extensive debate. Proponents say they bring accountability and useful metrics to education evaluation. Opponents say standardized tests are likely to be a misleading guide to educator quality. Although VAMs often adjust for some differences in student characteristics, educators have argued that these adjustments are inadequate. For example, a teacher with many students trying to overcome learning disabilities may be helping students improve more than a VAM will indicate.
In a new study, an MIT-based team of economists has developed a novel way of evaluating and improving VAMs. Drawing on data from Boston schools with admissions lotteries, the scholars used the random assignment of students to schools to see how similar groups of students fare in different classroom settings.
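The logic of using admissions lotteries can be sketched in a small simulation. Everything below is an assumption for illustration (the simulated applicants, the 3-point "true" school effect, and the noise levels are invented, not figures from the study): because an oversubscribed school's lottery randomly decides who is offered a seat, the offered and non-offered applicants are comparable on average, so the difference in their outcomes estimates the school's effect without the selection bias that can distort a VAM.

```python
import random
from statistics import mean

random.seed(0)

# Hypothetical lottery at an oversubscribed school: each applicant is
# randomly offered a seat or not, so the two groups are comparable.
applicants = [{"baseline": random.gauss(50, 10)} for _ in range(2000)]
for a in applicants:
    a["offered"] = random.random() < 0.5
    # Assumed data-generating process: the school adds 3 points on average.
    effect = 3.0 if a["offered"] else 0.0
    a["score"] = a["baseline"] + effect + random.gauss(0, 5)

def lottery_estimate(applicants):
    """Difference in mean score gains between offered and non-offered applicants."""
    offered = [a["score"] - a["baseline"] for a in applicants if a["offered"]]
    not_offered = [a["score"] - a["baseline"] for a in applicants if not a["offered"]]
    return mean(offered) - mean(not_offered)

print(round(lottery_estimate(applicants), 1))  # recovers roughly the true 3-point effect
```

Because assignment is random, this comparison does not depend on statistical adjustments for student background, which is what makes it a useful benchmark for checking whether a VAM's ratings are reliable.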
“Value-added models have high stakes,” says Josh Angrist, the Ford Professor of Economics at MIT and co-author of a new paper detailing the study. “It’s important that VAMs provide a reliable guide to school quality.”