Greetings!
After the LA Times published effectiveness rankings of 4th and 5th grade teachers in the Los Angeles School District earlier this year, there has been much public debate over the use of value-added models (VAM). A VAM is intended to be a statistical analysis of a teacher's effect on student achievement, taking into account a student's past performance and expected academic growth. While discussions of VAM are not new to educators or policy wonks, a group from the Brown Center on Education Policy at the Brookings Institution recently released a report on some of the questions and concerns surrounding VAM.

Last month, the Brookings Brown Center Task Group on Teacher Quality released a report titled "Evaluating Teachers: The Important Role of Value-Added." The task group included: Steven Glazerman, Mathematica Policy Research; Susanna Loeb, Stanford University; Dan Goldhaber, University of Washington; Douglas Staiger, Dartmouth College; Stephen Raudenbush, University of Chicago; and Grover J. "Russ" Whitehurst, The Brookings Institution.
Highlights from the report:
- Whether value-added information should be a component of teacher evaluation is a different question from how teacher evaluations should inform human resource policies and decisions.
- Much of the concern with VAM stems from the fear that an effective teacher could be misclassified as ineffective; yet, in many other professional fields, we readily accept that evaluations are not foolproof and that imprecise measures are often used to make "high stakes decisions that place societal or institutional interests above those of individuals."
- "...the interests of students and the interests of teachers in classification errors are not always congruent..." While there is rightfully concern over effective teachers being misclassified as ineffective, we also need to weigh this against the consequences for students of labeling ineffective teachers as satisfactory.
- "...all decision-making systems have classification error. The goal is to minimize the most costly classification mistakes, not eliminate all of them."
- Rather than holding an unrealistic standard of perfection for teacher evaluations, we should compare value-added models to other forms of teacher evaluation and classification.
- "The question, then, is not whether evaluations of teacher effectiveness based on value-added are perfect or close to it: they are not. The question, instead, is whether and how the information from value-added compares with other sources of information available to schools when difficult and important personnel decisions must be made."
You can download the full report here.
What do you think of the task group's findings? Share your perspective on ChalkBloggers.
Suggested reading for more information on VAM:
- Research Guidance to State Affiliates on Value-Added Teacher Evaluation Systems from the National Education Association
- Problems With the Use of Student Test Scores to Evaluate Teachers from the Economic Policy Institute