To make the problem concrete, we use a hypothetical district with math and ELA achievement test scores for 25,000 students per year in grades 3 through 8. We posit the availability of six years of data, with an intervention that dramatically changes the distribution of teaching effectiveness following year three. The objective is to obtain the best estimate of parameters describing the relationship between teaching value-added and student characteristics prior to the intervention, and of how this relationship changes following the intervention. We assume that the intervention changes not only the distribution of students among teachers and of teachers among schools, but also the value-added of individual teachers.
We simulate the properties of several popular estimators under various assumptions regarding the data generating process. For each set of assumptions about the sorting of students and teachers, we derive the "true" value of the parameters describing the relationship of effective teaching with student characteristics and test the bias and precision of each estimator under each scenario.
The simplest data generating process that we examine is based on a cognitive production function in which all variation among students is completely determined by the identity of their teacher and an unobserved factor that is uncorrelated among students, uncorrelated with observed information and uncorrelated over time. We assume that knowledge decays at a constant, homogeneous rate and that knowledge is perfectly measured by the end-of-year test score. Therefore, all previous inputs are captured by the lagged score:
(1) Y_icjt = a + b Y_i,t-1 + T_jt + e_icjt
where Y_icjt is the test score of student i in class section c taught by teacher j in year t. The contribution of teacher j in year t is represented by T_jt. The true relationship between teacher effectiveness and a student characteristic of interest is given by the correlation of T_jt and D_it, where D_it is a dummy variable indicating, for example, disadvantaged status.
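As a minimal sketch of the data generating process in (1), the following simulation assigns students to teachers nonrandomly so that disadvantaged students are more likely to face low-effectiveness teachers, then computes the "true" student-level correlation of T_jt and D_it. All parameter values (class size, effect variances, sorting probabilities) are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_teachers, class_size = 100, 25
n_students = n_teachers * class_size
a, b = 10.0, 0.7  # intercept and decay rate on the lagged score (assumed)

# Teacher effects T_jt, sorted so a simple sorting rule can be applied
T = np.sort(rng.normal(0.0, 2.0, n_teachers))

# Nonrandom sorting: probability of disadvantage falls with teacher rank,
# so disadvantaged students are concentrated with less effective teachers
p_disadv = np.linspace(0.6, 0.2, n_teachers)
teacher_of = np.repeat(np.arange(n_teachers), class_size)
D = rng.binomial(1, p_disadv[teacher_of]).astype(float)

# Equation (1): Y_icjt = a + b*Y_i,t-1 + T_jt + e_icjt
Y_lag = rng.normal(50.0, 10.0, n_students)
e = rng.normal(0.0, 5.0, n_students)
Y = a + b * Y_lag + T[teacher_of] + e

# "True" parameter of interest: corr(T_jt, D_it) at the student level
true_corr = np.corrcoef(T[teacher_of], D)[0, 1]
print(f"true corr(T_jt, D_it): {true_corr:.3f}")
```

Because the sorting rule ties high disadvantage rates to low teacher effects, the true correlation is negative by construction; any estimator can then be judged against this known benchmark.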
We modify (1) in a variety of ways. For example, we allow D_it to have a direct impact on Y_icjt, both for the individual student and through the classroom average of D_it, and allow D_it to be correlated with T_jt at the classroom level. We then examine bias in the estimated correlation of T_jt and D_it under several popular value-added modeling approaches, including aggregated residuals, teacher random effects and fixed effects, feasible generalized least squares, and models that include student as well as teacher fixed effects. In addition, we examine how each model performs with and without controls for student-level and classroom-level measures of student disadvantage.
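To illustrate the kind of bias at issue, the sketch below implements one of the simpler approaches named above, aggregated residuals: regress Y on the lagged score by OLS, average the residuals within teacher, and correlate the resulting teacher-effect estimates with D_it. The DGP adds a direct effect of D_it alongside a T_jt correlated with D_it at the classroom level; the parameter values and the sorting rule are illustrative assumptions only, not the study's specification.

```python
import numpy as np

rng = np.random.default_rng(1)
n_teachers, class_size = 200, 25
n = n_teachers * class_size
a, b, c = 10.0, 0.7, -3.0  # c: assumed direct effect of disadvantage

# Teacher effects correlated with disadvantage via nonrandom sorting
T = np.sort(rng.normal(0.0, 2.0, n_teachers))
p_disadv = np.linspace(0.6, 0.2, n_teachers)
teacher_of = np.repeat(np.arange(n_teachers), class_size)
D = rng.binomial(1, p_disadv[teacher_of]).astype(float)

# Modified (1): D_it now has a direct impact c on the score
Y_lag = rng.normal(50.0, 10.0, n)
Y = a + b * Y_lag + c * D + T[teacher_of] + rng.normal(0.0, 5.0, n)

def aggregated_residual_corr(X):
    """OLS of Y on X, residuals averaged by teacher, correlated with D."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    T_hat = np.bincount(teacher_of, weights=resid) / class_size
    return np.corrcoef(T_hat[teacher_of], D)[0, 1]

X_no_d = np.column_stack([np.ones(n), Y_lag])       # no control for D
X_with_d = np.column_stack([np.ones(n), Y_lag, D])  # student-level D control

true_corr = np.corrcoef(T[teacher_of], D)[0, 1]
est_no_d = aggregated_residual_corr(X_no_d)
est_with_d = aggregated_residual_corr(X_with_d)
print(f"true: {true_corr:.3f}  no D control: {est_no_d:.3f}  "
      f"with D control: {est_with_d:.3f}")
```

Omitting D_it pushes its direct effect into the teacher residuals, while controlling for it at the student level also absorbs part of the genuine classroom-level correlation between T_jt and D_it; comparing both estimates to the known true correlation is exactly the exercise the simulation study performs across estimators.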