Poster Paper:
What You See Is What You Get? Examining the Relationship Between Teachers' Observation Scores and the Fade-out of Teacher Value-Added Measures
This paper investigates the relationship between teachers’ instructional quality (as measured by classroom observation scores), the persistence of teacher value-added, and students’ longer-run test scores. Using data from a small state, it addresses the following research questions: 1) Do teachers’ classroom observation scores explain variation in the persistence of teacher value-added? 2) Do teachers’ classroom observation scores explain variation in students’ longer-run test scores that is not captured by short-run value-added measures?
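As an illustrative sketch only (the notation and specification below are assumptions for exposition, not the paper’s actual model), the two research questions could be operationalized with a regression of the form

$$
A_{i,t+1} \;=\; \lambda\,\widehat{VA}_{j(i,t)} \;+\; \delta\left(\widehat{VA}_{j(i,t)} \times Obs_{j(i,t)}\right) \;+\; \gamma\,Obs_{j(i,t)} \;+\; X_{i,t}'\beta \;+\; \varepsilon_{i,t+1},
$$

where $A_{i,t+1}$ is student $i$’s test score one year after being assigned to teacher $j(i,t)$, $\widehat{VA}_{j(i,t)}$ is that teacher’s estimated short-run value-added, $Obs_{j(i,t)}$ is the teacher’s classroom observation score, and $X_{i,t}$ collects student-level controls. Under this assumed setup, $\lambda$ measures how much of a teacher’s short-run effect persists, a nonzero $\delta$ would indicate that observation scores explain variation in that persistence (question 1), and a nonzero $\gamma$ would indicate that observation scores explain longer-run test scores beyond what short-run value-added captures (question 2).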
Research suggests that teachers’ classroom observation scores capture different aspects of teachers’ performance than do value-added measures. Observation scores could capture aspects of teaching that contribute to the persistence of teacher-imparted knowledge, yet are not captured by short-run value-added measures. If so, the effects of teachers with higher observation scores may persist longer.
Furthermore, observation scores could capture aspects of teaching that are not reflected in short-run test scores or short-run value-added measures, but that eventually lead to higher longer-run test scores. If so, teachers’ observation scores may explain variation in students’ longer-run test scores that is not captured by short-run value-added measures.
Improving methods that estimate and incorporate teachers’ lasting impacts may better align teacher instruction with desired outcomes for long-term student learning. Many teacher evaluation systems use multiple measures of teacher performance, including student test scores and classroom observations. Investigating whether observation scores offer additional information, particularly insight into teachers’ impact on longer-term knowledge, can inform policy decisions regarding the relative weights of these two components within a teacher evaluation system.
In this paper, I contribute to a larger body of research that seeks to improve the measurement of teacher quality by incorporating teachers’ lasting impacts on students’ test scores. The analyses provide suggestive evidence that, in math, teachers’ observation scores explain variation in students’ longer-run test scores that is not captured by the persistence of teacher value-added.