Value-Added Models – Equity and Exceptionalism or Evaluation?

Anyone in education who has been introduced to the term value-added modeling is likely aware of how difficult a topic it is when applied to measuring teachers. I was at the American Federation of Teachers’ (AFT) biannual TEACH2011 conference this week, and it was a subject of intense interest, as AFT has been active in trying to define what distinguishes a good value-added model from a poor one and how these models might be used to make productive decisions about teachers. For more information about value-added models, or VAMs as some call them, see here.

What struck me at this meeting was how the statistical expert(s) trying to explain the VAM to AFT members kept focusing on its technical sophistication and how important it is to build the model with clean and reliable data. No doubt this is true. A model built with too few parameters or with bad data will produce wrong results, with potentially disastrous consequences, as the recent suicide of a Los Angeles teacher showed after the newspaper published teachers’ value-added scores. It is also the case that getting good data for VAMs can be exceedingly difficult for a variety of reasons. But it seemed to me that these experts also missed an opportunity to explain why VAMs may make sense, why they can be good for teachers and kids, and why so many people may be interested in their use even if they cannot reliably serve as a predominant or sole evaluation measure for teachers.

From my perspective, the true power of value-added models lies in their ability to factor in the student characteristics that all teachers know have a huge impact on how effective a teacher can appear to be at developing high-achieving students. When built properly with good data, they can be more sensitive to the conditions teachers work in, providing a more equitable view than any single year’s achievement scores can show. With further development, perhaps they can also show how teachers affect students’ subsequent years. These models can likewise be used to show how some teachers beat the odds. In my dissertation study a few years back, I was struck by a phrase several people used about the data: they said the data showed which teachers could teach “those kids.” We all knew who those kids were; they were the likely dropouts (as I was) and the ones that schools often seem to fail. VAMs can be a tool to help identify exceptional teachers who are beating the odds with some kids. Even if that information cannot be used for high-stakes decisions (tenure, retention), it could be very valuable systemically in helping other teachers connect with and learn from those who have found ways to reach students at the greatest risk. These are good things for education that the technical explanations don’t get at.
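The core intuition here, that a teacher should be judged against what students were expected to achieve given where they started, can be sketched in a few lines of code. The sketch below is a toy illustration only, not any district’s actual model: it fits expected current-year scores from prior-year scores and treats each teacher’s average residual (actual minus expected) as a crude value-added estimate. The teachers, scores, and single-predictor model are all invented for the example; real VAMs use many more student covariates and far more careful statistics.

```python
from statistics import mean

# Hypothetical records: (teacher, prior-year score, current-year score).
# All data here is fabricated for illustration.
students = [
    ("A", 60, 68), ("A", 70, 77), ("A", 80, 85),
    ("B", 55, 58), ("B", 65, 66), ("B", 75, 74),
]

prior = [p for _, p, _ in students]
current = [c for _, _, c in students]
mp, mc = mean(prior), mean(current)

# Ordinary least squares by hand: slope = cov(prior, current) / var(prior).
slope = (sum((p - mp) * (c - mc) for p, c in zip(prior, current))
         / sum((p - mp) ** 2 for p in prior))
intercept = mc - slope * mp

# A student's residual is the actual score minus the score predicted
# from prior achievement alone.
residuals = {}
for teacher, p, c in students:
    residuals.setdefault(teacher, []).append(c - (slope * p + intercept))

# A teacher's crude value-added estimate is the mean residual of their students.
value_added = {t: round(mean(r), 2) for t, r in residuals.items()}
print(value_added)  # e.g. {'A': 2.99, 'B': -2.99}
```

Note what the toy example shows: teacher B’s students post lower raw scores, but the model compares each student only to what was expected from their own starting point, which is exactly the kind of adjustment that makes a VAM fairer to teachers of “those kids” than a raw achievement ranking.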

Dr. Piety is a national expert in educational data. He is on the faculty of the Robert H. Smith School of Business at the University of Maryland where he teaches information and database technologies, cloud computing, and social media. He is the author of Assessing the Educational Data Movement, a book on using data to improve school success.
