Panel Paper: Education Reform and Student Achievement in the District of Columbia: Three Sources of Evidence

Saturday, November 8, 2014 : 2:45 PM
Aztec (Convention Center)


Melinda Adnot1, Thomas Dee2, Veronica Katz1 and Jim Wyckoff1, (1)University of Virginia, (2)Stanford University
Teacher evaluation and performance pay have increasingly been championed as methods for improving student achievement. These systems are hypothesized to work through both the positive selection of teachers and improvement among existing teachers. Although there is a long history of performance pay in education, most of these initiatives have not been sustained for long and have generally been viewed as unsuccessful (Murnane and Cohen, 1986). For instance, evaluations of merit-pay experiments in Nashville, New York, and Chicago found that teacher performance did not increase as a result of the performance pay incentives (Fryer, 2013; Glazerman and Seifullah, 2012; Springer et al., 2010). In contrast to these null findings, Dee and Wyckoff (2013) present causal evidence that IMPACT, the District of Columbia teacher evaluation system, has been effective in improving teacher performance among teachers confronting strong dismissal and financial incentives. However, Dee and Wyckoff do not demonstrate that the IMPACT-induced improvement in teacher performance results in improved student achievement.

In an effort to evaluate the broader effects of IMPACT on student achievement, we pursue a difference-in-differences identification strategy that compares achievement in DCPS schools before and after the implementation of IMPACT with the contemporaneous changes in achievement in three different comparison groups of non-DCPS schools. Here, the relevant treatment contrast turns on the summative effect of DCPS reforms implemented during the IMPACT era (e.g., teacher evaluation and pay, school and district leadership and climate). Our analysis will effectively compare the pre/post-reform changes in DCPS to the contemporaneous changes observed in: (1) student-level data from DCPS and DC charter schools, (2) school-level data from DCPS and from Baltimore City and Prince George's County public schools in Maryland, and (3) district-level NAEP data available over several years for DCPS and other districts participating in the Trial Urban District Assessment (TUDA). Each of these potential comparison groups has limitations as an appropriate counterfactual; however, these limitations differ across comparison groups. In combination, these results provide a robust examination of IMPACT and the DCPS reforms by leveraging different treatment contrasts across alternative data sources.
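To fix ideas, a canonical two-group version of this design can be written as the estimating equation below. This is a minimal sketch: the notation, controls, and error structure are illustrative assumptions, not the authors' exact specification, which may include richer fixed effects and group-specific contrasts.

\[
Y_{ist} = \alpha + \beta\,(\mathrm{DCPS}_{s} \times \mathrm{Post}_{t}) + \gamma\,\mathrm{DCPS}_{s} + \delta\,\mathrm{Post}_{t} + X_{ist}'\theta + \varepsilon_{ist}
\]

Here \(Y_{ist}\) is the achievement of student \(i\) in school \(s\) in year \(t\), \(\mathrm{DCPS}_{s}\) indicates a DCPS school, \(\mathrm{Post}_{t}\) indicates the post-IMPACT period, and \(X_{ist}\) collects observable controls. The coefficient \(\beta\) is the difference-in-differences estimate of the reform-era change in DCPS achievement relative to the comparison group.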

References:

Dee, T., & Wyckoff, J. (2013). Incentives, selection, and teacher performance: Evidence from IMPACT (Working Paper No. w19529). Cambridge, MA: National Bureau of Economic Research.

Fryer, R. (2013). Teacher incentives and student achievement: Evidence from New York City public schools. Journal of Labor Economics, 31(2), 373-427.

Glazerman, S., & Seifullah, A. (2012). An Evaluation of the Chicago Teacher Advancement Program (Chicago TAP) After Four Years. Princeton, NJ: Mathematica Policy Research.

Murnane, R. J., & Cohen, D. K. (1986). Merit pay and the evaluation problem: Why most merit pay plans fail and a few survive. Harvard Educational Review, 56(1), 1-18.

Springer, M. G., Ballou, D., Hamilton, L., Le, V. N., Lockwood, J. R., McCaffrey, D. F., Pepper, M., & Stecher, B. M. (2010). Teacher Pay for Performance: Experimental Evidence from the Project on Incentives in Teaching (POINT). Nashville, TN: National Center on Performance Incentives at Vanderbilt University.