Thursday, November 6, 2014: 11:15 AM
Aztec (Convention Center)
*Names in bold indicate Presenter
The use of teacher value-added models to measure teacher effectiveness is expanding rapidly, with teacher value-added estimates being incorporated into teacher evaluation systems and potentially high-stakes decisions. In some settings, attempts are made to account for precision, such as when standard errors are explicitly incorporated in constructing performance quantiles. However, we still know little about the precision of these value-added estimates, or whether the standard errors are calculated correctly when precision is addressed. The nested nature of administrative education data leads to several clustering options when computing the standard errors, and there is little or no research providing guidance as to which method is most appropriate in this setting. Our study aims to fill this gap in the literature. We first use simulated student achievement data to study the behavior of standard errors for teacher value-added estimates under various student, teacher, and school settings. We then use student-level administrative data to shed light on real-world applications. Results show that the standard errors can be quite sensitive to the formula one chooses to calculate them, meaning that the policy conclusions drawn from the value-added estimates themselves may depend on that choice. Hence, knowledge of the reliability of value-added models could be a critical part of the decision-making process for administrators and policy makers, as well as shape future research on teacher value-added.
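The clustering sensitivity described above can be illustrated with a toy simulation; this is a minimal sketch under assumed parameters (classroom counts, shock sizes, and a simple mean-based value-added estimate are all hypothetical, not the authors' actual model). It contrasts a naive standard error that treats students as independent with a cluster-robust one that sums residuals within classrooms first:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nested data for one teacher: 4 classrooms of 25 students.
# A shared classroom-level shock induces within-classroom correlation.
n_class, n_per = 4, 25
teacher_effect = 0.20
class_shock = rng.normal(0, 0.15, n_class)          # shared within a classroom
scores = (teacher_effect
          + np.repeat(class_shock, n_per)
          + rng.normal(0, 0.50, n_class * n_per))   # idiosyncratic student noise

n = scores.size
vam = scores.mean()        # simple value-added estimate: mean student score
resid = scores - vam

# Naive SE: treats all n students as independent draws.
se_naive = scores.std(ddof=1) / np.sqrt(n)

# Cluster-robust SE: sum residuals within each classroom before squaring,
# so the shared classroom shocks are not averaged away.
groups = np.repeat(np.arange(n_class), n_per)
cluster_sums = np.array([resid[groups == g].sum() for g in range(n_class)])
se_cluster = np.sqrt((cluster_sums ** 2).sum()) / n

print(f"estimate={vam:.3f}  naive SE={se_naive:.3f}  cluster SE={se_cluster:.3f}")
```

When the classroom shock is nonzero, the two formulas generally diverge, which is the sense in which downstream policy conclusions can depend on the clustering choice.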