Panel Paper:
Critical Conversations: Experimental Evidence on Improving Evaluator Feedback to Teachers
Capacity constraints required that the district offer the six-week training program to groups of evaluators across two different years. I exploit this staggered rollout to estimate the short-term causal effect of the training program by randomly assigning school-based evaluation teams to attend a training offered in either the first or the second year. Evaluation teams assigned to the first year serve as the treatment group, while teams assigned to the second year serve as the control group. I estimate the impact of the training program on a range of outcomes, including the frequency and length of post-observation meetings, the distribution of teachers' evaluation ratings, the quality of feedback teachers received, school climate, teachers' career intentions, and school performance measures based on student achievement.
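As an illustrative sketch only (the abstract does not report the exact specification), the intent-to-treat effect of random assignment to the first-year training could be estimated with a regression of the form

$$y_{jst} = \alpha + \beta \,\mathrm{Treat}_{s} + X_{jst}'\gamma + \varepsilon_{jst},$$

where $y_{jst}$ is an outcome for teacher $j$ in school $s$ at time $t$, $\mathrm{Treat}_{s}$ indicates that school $s$'s evaluation team was randomly assigned to the first-year training, $X_{jst}$ is an optional vector of baseline covariates, and $\beta$ is the intent-to-treat estimate. All notation here is assumed for illustration; in practice standard errors would likely be clustered at the school level, the unit of randomization.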
I find that evaluators who complied with the random assignment to attend the training program rated the training substantially higher than other professional development activities they had participated in, reported pre-post improvement on a range of self-assessed evaluation practices, and could list specific things they were doing differently because of the training. Intent-to-treat estimates reveal that the training program increased the frequency of in-person meetings with teachers and the length of the written feedback evaluators provided. I also find that teachers in schools where evaluators participated in the training rated the evaluation feedback they received as more specific and actionable.