Panel Paper: In Search of High-Quality Evaluation Feedback: An Administrator Training Field Experiment

Saturday, November 9, 2019
Plaza Building: Concourse Level, Governor's Square 17 (Sheraton Denver Downtown)

*Names in bold indicate Presenter

Matthew A. Kraft and Alvin Christian, Brown University


Over the last decade, nearly every state in the U.S. has implemented major reforms to its teacher evaluation system (Donaldson & Papay, 2015; Steinberg & Donaldson, 2016). A twofold theory of action motivated these reforms: differentiating teacher performance for accountability (Hanushek, 2009; Thomas, Wingert, Conant, & Register, 2010) and promoting professional development through classroom observations and feedback (Almy, 2011; Curtis & Wiener, 2012; Papay, 2012). Most states and districts have emphasized the latter goal of improving teachers’ instruction (Center on Great Teachers and Leaders, 2014). Several years into this national policy experiment, we still know little about the quality of feedback teachers receive as part of new high-stakes evaluation systems.

In this paper, we examine teachers’ perceptions of the feedback they receive as part of Boston Public Schools’ (BPS) teacher evaluation system and evaluate the district’s efforts to strengthen the quality of this feedback. In the 2011-12 academic year, BPS implemented major reforms to its evaluation system, with a focus on using the evaluation process as a tool for teacher development. The following year, BPS convened a group of experienced administrators to develop and pilot a multi-day evaluator training program intended to improve the quality of feedback administrators provide to teachers. We evaluate the implementation and effects of this intensive 15-hour evaluator training series by exploiting the staggered rollout of the program across two academic years.
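The abstract does not spell out the estimating equation, but a staggered rollout of this kind is commonly analyzed with a difference-in-differences or two-way fixed effects design. The specification below is a minimal illustrative sketch under that assumption; the outcome measure, treatment indicator, and fixed-effect structure are our notation, not the authors' exact model.

\begin{equation}
  Y_{ist} = \beta\,\mathrm{Trained}_{st} + \gamma_s + \delta_t + X_{ist}'\theta + \varepsilon_{ist}
\end{equation}

Here $Y_{ist}$ would be a measure of the feedback quality perceived by teacher $i$ in school $s$ in year $t$, $\mathrm{Trained}_{st}$ would indicate whether the evaluator in school $s$ had completed the training by year $t$, $\gamma_s$ and $\delta_t$ are school and year fixed effects, and $X_{ist}$ collects teacher-level controls. Under this reading, the coefficient of interest, $\beta$, compares changes in perceived feedback quality in schools whose evaluators trained in the first year of the rollout with changes in schools whose evaluators trained later.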

Analyzing the quality of the feedback teachers receive under high-stakes evaluation systems, the correlates of high-quality feedback, and the potential to improve this feedback advances our understanding of a key mechanism through which evaluation reforms were intended to promote teacher development. We find that BPS teachers generally thought that evaluators were fair and accurate raters, but viewed the quality of the feedback they received less favorably. Ultimately, just over a quarter of teachers felt that their instruction improved because of this feedback.

We next explore which teacher, evaluator, and school characteristics are correlated with feedback that teachers perceived to be of high quality. We find that less-experienced teachers are more likely to rate the feedback they receive as high quality, and that evaluators with longer tenures in their current schools are perceived to provide better feedback. We also find that teachers of color who are evaluated by administrators of the same race report receiving substantially higher-quality feedback than other teachers.

The evaluator training program we evaluate was reasonably well attended by administrators and, by several metrics, implemented successfully. However, we find little evidence that the intensive training improved the perceived quality of evaluation feedback or affected the frequency or duration of the observation and feedback cycles administrators conducted, in either the short or the medium term.

Together, our descriptive and causal evidence sheds new light on the potential and limitations of promoting professional development through the teacher evaluation process. These findings can inform states’ and districts’ ongoing efforts to redesign their teacher evaluation systems under the increased flexibility provided by the Every Student Succeeds Act (ESSA).
