What are value-added models (VAMs)? They are a class of statistical models that have been used to evaluate teacher performance. VAMs have been under development since the 1990s and are seen as an alternative to the achievement model used in No Child Left Behind (NCLB). NCLB's indicator was adequate yearly progress (AYP), an achievement model that tracks how a school's scores differ from year to year, not how the students themselves change. AYP compares different groups of students in the same grade and the same school, but in different years. When a school's demographics change, as more or less advantaged students enroll, AYP scores can naturally shift. These shifts, however, have less to do with the quality of teaching than with the characteristics of the students being tested. AYP was also a school-level indicator. While school leaders may have considered how one teacher's scores related to another's, the policy did not; it focused on schools and subgroups. And many districts did not link their data systems in a way that would support systematic teacher-level analysis.
VAMs, by contrast, are organized around students and their teachers. A VAM is a kind of growth model. Growth models track individual students and how their scores change over time. They are less sensitive to demographic shifts because the same students are compared across grades rather than different students within the same grade. Simple growth models can show how much students have progressed in a given year (provided the assessments are linked from year to year). But they are not sensitive to differences among students, namely, that some students, because of home life and/or prior knowledge, are more likely to make gains than others.
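The gain-score logic of a simple growth model can be sketched in a few lines. The data below are entirely hypothetical, and this assumes the two years' assessments are on a linked scale so that subtracting them is meaningful:

```python
# A minimal sketch of a simple growth (gain-score) model, with made-up data.
# Assumes the year-to-year assessments are vertically linked (comparable scales).
scores = [
    # (student, last_year_score, this_year_score)
    ("A", 410, 435),
    ("B", 520, 540),
    ("C", 380, 430),
]

# Each student's gain is simply this year's score minus last year's.
gains = {student: this - last for student, last, this in scores}
print(gains)  # {'A': 25, 'B': 20, 'C': 50}

# A class-level summary is the average gain across students.
average_gain = sum(gains.values()) / len(gains)
```

Note that nothing here adjusts for who the students are: student C's large gain and student B's small one are treated at face value, which is exactly the limitation described above.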
A VAM adds the capability to statistically account for those differences among students. It is designed to show the gains students make while being taught by a teacher, compared to the gains they could be expected to make given the characteristics that research has shown affect student performance. A VAM tracks changes in student scores while also factoring in characteristics known to affect achievement, including prior knowledge and socioeconomic status. A VAM can then be used to quantify the improvements in learning associated with each teacher. Because these models statistically account for the growth that can be expected of students with specific backgrounds, they can be used to compare teachers who teach students in differing circumstances, something AYP cannot do.
VAMs have both supporters and critics. Critics point to the high year-over-year variation in the estimates: a teacher ranked in the top 25% one year may fall into the bottom 25% the next, so conclusions drawn from any single year's scores can be misleading. Critics also cite data quality problems and other technical difficulties with using VAMs in practice. Supporters counter that all measurement systems are imperfect and that the benefits of VAMs outweigh their limitations, especially when they are combined with other measures to support thoughtful decisions. In this view, value-added estimates accumulated over several years, alongside other related data, do help reveal important differences in teachers' abilities.
In 2009/2010, VAMs became a far more prominent concern in education when the Obama Administration embraced them and, as part of the $4.3 billion Race to the Top program, required states to link their teacher and student data systems in order to compete for a share of the stimulus funds. Researchers quickly presented the case against the use of such data (one example), and supporters presented the case for it (a rebuttal). Expect VAMs to appear in this and other forums frequently.