Panel Paper: Does Evaluation Distort Teacher Effort and Decisions? Quasi-Experimental Evidence from a Policy of Retesting Students

Saturday, November 9, 2019
Plaza Building: Concourse Level, Governor's Square 15 (Sheraton Denver Downtown)


Esteban Aucejo, Arizona State University; Teresa Romano, Emory University; and Eric Taylor, Harvard University


Performance evaluation may change employee effort and decisions in unintended ways, for example, in multitask jobs where the evaluation measure captures only a subset of job tasks, or weights tasks differently than the employer does. We show evidence of this multitask distortion in schools, where teachers allocate effort across students (tasks). Teachers are evaluated based on student test scores; students who fail the test are retested two to three weeks later; and only the higher of the two scores counts in the teacher's evaluation. This retesting feature creates a sharp difference in the returns to teacher effort directed at failing versus passing students, even though barely failing and barely passing students arguably have an equal educational claim on (returns to) teacher effort. Using regression discontinuity (RD) methods, we show that students who barely fail the end-of-school-year math test, and are then retested, score higher one year later (t+1) than students who barely pass. This difference in scores appears during the four years of the retest policy, but not in the years before or after. We find no evidence that the results arise from retesting per se, or from changes in students' own behavior alone. The results suggest teachers give more effort to some students (tasks) simply because of the evaluation system's incentives.
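The RD logic can be illustrated with a minimal sketch. The code below is purely hypothetical and not the authors' implementation: it simulates a running variable (the year-t score centered at the passing cutoff), builds in a jump in the t+1 outcome for barely-failing (retested) students, and recovers that jump with a local linear fit on each side of the cutoff. The variable names, bandwidth, and simulated effect size (0.3) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: running variable is the year-t math score centered
# at the passing cutoff. Students below 0 barely fail and are retested.
n = 5000
score = rng.uniform(-10, 10, n)
fail = (score < 0).astype(float)

# Outcome at t+1: smooth in the running variable plus an assumed jump
# of 0.3 for failing (retested) students.
y_t1 = 0.05 * score + 0.3 * fail + rng.normal(0, 1, n)

def rd_estimate(x, y, bandwidth):
    """Local linear RD estimate: fit a separate line on each side of
    the cutoff within the bandwidth, then difference the intercepts
    (the two one-sided limits of the outcome at x = 0)."""
    intercepts = []
    for side in (x < 0, x >= 0):
        mask = side & (np.abs(x) <= bandwidth)
        slope, intercept = np.polyfit(x[mask], y[mask], 1)
        intercepts.append(intercept)
    # Jump = limit from the fail side minus limit from the pass side.
    return intercepts[0] - intercepts[1]

tau = rd_estimate(score, y_t1, bandwidth=5.0)
print(round(tau, 2))  # should be close to the simulated jump of 0.3
```

Because students just below and just above the cutoff are otherwise comparable, the intercept gap identifies the effect of barely failing (and hence being retested), which is the comparison the abstract describes.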