Validating Value-Added Measures of Teacher Performance
Saturday, November 14, 2015 : 10:55 AM
Japengo (Hyatt Regency Miami)
Regression-based estimates of a teacher's impact on student test scores, often referred to as value-added measures, have become increasingly important in education policy. However, the measures are also controversial, in part because of concerns about bias (Rothstein 2010). This paper provides new empirical evidence on the degree of bias in commonly used value-added estimators. We exploit a randomized experiment that induced exogenous variation in teacher value added. Using randomization as an instrument, we generate causal estimates of the impact of a one-unit change in value-added measures on student test scores. Following Chetty et al. (2014), we refer to the deviation of this impact from one as forecast bias.

Because our estimates are based on modest sample sizes and therefore have limited statistical precision, we consider our findings alongside a series of new and related results from the emerging literature that uses experimental or quasi-experimental methods to generate plausibly exogenous variation in teacher value added. Most of these studies address the selection of students into different teachers' classrooms within schools; ours is the first to use randomization to address selection between schools.

The paper documents a pattern of consistent findings in this literature: value-added estimates exhibit very little forecast bias. Our new results for elementary school teachers are consistent with this pattern; we cannot reject the null hypothesis that the value-added estimates have zero forecast bias, although our confidence intervals are wider than many of the other estimates in the literature. Our findings for middle school, which are especially imprecise, indicate that value-added measures of teacher performance did not predict subsequent teacher performance during the experimental period.
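The forecast-bias concept above can be illustrated with a minimal simulation sketch. This is not the paper's estimator or data; all parameter values, sample sizes, and variable names below are hypothetical assumptions chosen for illustration. Under random assignment of students to teachers and correctly shrunken value-added predictions, the slope from regressing student scores on predicted value added should be close to one, i.e., forecast bias close to zero:

```python
# Illustrative simulation of the forecast coefficient for teacher
# value-added (VA) measures, in the spirit of Chetty et al. (2014).
# All numeric parameters are hypothetical, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

n_teachers, n_students = 2000, 20          # students per teacher (assumed)
sigma_mu, sigma_e, sigma_eps = 0.15, 0.10, 0.50  # assumed variance components

# True teacher effects and noisy prior-period VA estimates.
mu = rng.normal(0.0, sigma_mu, n_teachers)
va_raw = mu + rng.normal(0.0, sigma_e, n_teachers)

# Empirical-Bayes shrinkage toward zero; with the correct shrinkage
# factor, the forecast coefficient equals one in expectation.
shrink = sigma_mu**2 / (sigma_mu**2 + sigma_e**2)
va_hat = shrink * va_raw

# Randomly assigned students: each score is the teacher's true effect
# plus idiosyncratic student-level noise.
scores = np.repeat(mu, n_students) + rng.normal(
    0.0, sigma_eps, n_teachers * n_students
)
x = np.repeat(va_hat, n_students)

# Forecast coefficient: OLS slope of student scores on predicted VA.
slope = np.cov(x, scores)[0, 1] / np.var(x, ddof=1)
forecast_bias = 1.0 - slope
print(f"forecast coefficient = {slope:.3f}, forecast bias = {forecast_bias:.3f}")
```

In this stylized setup the slope converges to one, so the estimated forecast bias hovers near zero; in the paper's setting, randomization supplies the exogenous variation that makes the analogous slope interpretable as causal.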