
Panel Paper: Critical Conversations: Experimental Evidence on Improving Evaluator Feedback to Teachers

Friday, November 13, 2015 : 9:10 AM
Tequesta (Hyatt Regency Miami)


Matthew Kraft, Brown University
The potential to transform teacher evaluation into a process for supporting the professional development of teachers rests largely on the ability of the evaluators.  In most districts, school administrators are tasked with primary responsibility for conducting classroom observations and providing evaluation feedback.  Many of these administrators have had little to no training on how to provide specific and actionable feedback to teachers about their instructional practice.  This study examines the potential for the evaluation process to improve teacher practice and student achievement by strengthening the quality of feedback teachers receive.  From 2012 to 2014, I worked in partnership with a large urban school district to develop and implement a new evaluator training program intended to build evaluators’ capacity to provide effective feedback through the evaluation process.

Capacity constraints required the district to offer the six-week training program to groups of evaluators across two different years.  I exploit this staggered rollout to estimate the short-term causal effect of the training program by randomly assigning school-based evaluation teams to attend a training offered in either the first or the second year.  Evaluation teams assigned to the first year serve as the treatment group, while teams assigned to the second year serve as the control group.  I estimate the impact of the training program on a range of outcomes, including the frequency and length of post-observation meetings, the distribution of teachers’ evaluation ratings, the quality of feedback teachers received, school climate, teachers’ career intentions, and school performance measures based on student achievement.
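The design described above supports intent-to-treat (ITT) estimation: outcomes are compared by random assignment to the first-year training cohort, regardless of whether a team actually attended. As a minimal sketch of the idea — with entirely simulated data and a hypothetical outcome, not the study's actual data or estimates — the ITT effect reduces to a difference in means across assignment groups:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: Z = 1 for schools whose evaluation teams were
# randomly assigned to the first-year training cohort, Z = 0 for the
# second-year (control) cohort. Y is a school-level outcome, e.g. frequency
# of post-observation feedback meetings. All values are simulated.
n = 200
Z = rng.integers(0, 2, size=n)             # random assignment indicator
Y = 2.0 + 0.5 * Z + rng.normal(0, 1, n)    # simulated outcome; true ITT = 0.5

# Intent-to-treat estimate: compare mean outcomes by *assignment*,
# regardless of whether the team actually attended the training.
itt = Y[Z == 1].mean() - Y[Z == 0].mean()

# Standard error for the difference in means (unequal variances)
se = np.sqrt(Y[Z == 1].var(ddof=1) / (Z == 1).sum()
             + Y[Z == 0].var(ddof=1) / (Z == 0).sum())

print(f"ITT estimate: {itt:.2f} (SE {se:.2f})")
```

Because assignment is randomized, this simple comparison is unbiased for the effect of being offered the training; noncompliance (teams that did not attend) attenuates it toward zero relative to the effect of attending.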

I find that evaluators who complied with their random assignment to attend the training program rated it substantially higher than other professional development activities in which they had participated, reported pre-post improvement on a range of self-assessed evaluation practices, and could list specific things they were doing differently because of the training.  Intent-to-treat estimates reveal that the training program increased the frequency of in-person meetings with teachers and the length of the written feedback evaluators provided.  I also find that teachers in schools where evaluators participated in the training rated the evaluation feedback they received as more specific and actionable.