Panel Paper:
The Common Core Conundrum: To What Extent Should We Worry That Test and Curriculum Changes Will Affect Test-Based Measures of Teacher Performance?
Two important pillars of Race to the Top, the federal government’s multi-billion-dollar effort to encourage reform and improvement in America’s schools, are performance-based reviews of teachers and the implementation of the Common Core State Standards (CCSS). The proposal to evaluate teachers on student test results at the same time that states transition to the Common Core has generated considerable controversy among educators and policymakers alike. A central objection to the implementation plan is the perceived unfairness of holding teachers accountable for results in the initial year of a new assessment that is designed to be a more rigorous test of student learning.
A variety of policymakers and practitioners, most prominently the teachers’ unions, have argued that teachers need more time to develop lessons and learn about the new tests before being held accountable for student performance. In response to these concerns about the transition to new standards, Secretary of Education Arne Duncan recently announced a one-year moratorium on the use of test-based teacher evaluations. Are these concerns well-founded? It is not possible to know a priori the extent to which the CCSS curriculum and testing changes will have a meaningful impact on judgments about teacher performance. But curriculum and assessment changes are not new: prior to the CCSS, states routinely revised standards and implemented new assessments. For example, North Carolina, one of the sites for this study, revised its curriculum and associated assessments on a recurring five-year schedule, with a previous revision described as a “drastic change in the curriculum”.
A handful of recent studies have investigated the stability of value-added measures over time, across calculation methods, and across schools. Many of these studies find that a sizeable portion of a teacher’s measured performance persists over time, across classrooms, and across schools. Less is known, however, about whether teachers who are successful under one curriculum and testing regime continue to be successful after the implementation of a new regime. We address this issue in this paper, reporting on research that assesses the extent to which student test-based measures of teacher performance are affected by curriculum and testing changes. Specifically, we use longitudinal data from North Carolina and Washington, which implemented test changes that allow us to compare the stability of teacher value-added measures across test regimes with their stability within a given regime.
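To fix ideas, the sketch below (not code from the papers in this panel) shows one way such a stability comparison could be implemented. It assumes a hypothetical student-level panel with columns year, teacher_id, score, and lag_score, and uses a deliberately stripped-down value-added specification; the column names, years, and specification are all illustrative assumptions.

```python
# Illustrative sketch only: simple value-added with teacher fixed effects,
# then a within-regime vs. cross-regime comparison of rank stability.
import pandas as pd
import statsmodels.formula.api as smf

def teacher_effects(df):
    """Teacher fixed effects from a simple regression of the current test
    score on the prior-year score (a deliberately minimal specification)."""
    fit = smf.ols("score ~ lag_score + C(teacher_id) - 1", data=df).fit()
    fe = fit.params.filter(like="C(teacher_id)")
    fe.index = fe.index.str.extract(r"\[(.+)\]")[0]  # keep only the teacher id
    return fe

def stability(df, year_a, year_b):
    """Spearman rank correlation of teacher effects estimated in two years."""
    fe_a = teacher_effects(df[df["year"] == year_a])
    fe_b = teacher_effects(df[df["year"] == year_b])
    both = pd.concat([fe_a.rename("a"), fe_b.rename("b")], axis=1, join="inner")
    return both["a"].corr(both["b"], method="spearman")

# Hypothetical usage, with 2009 as the first year of the new assessment:
# within = stability(panel, 2007, 2008)  # two adjacent years, old assessment
# across = stability(panel, 2008, 2009)  # last old-test year vs. first new-test year
```

If teacher rankings are no less stable across the regime change than across adjacent years within a regime, the cross-regime correlation should be of comparable size to the within-regime correlation.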
We assess three related concerns about the use of value-added models (VAMs) during testing transition years. First, in the initial year of a new assessment, states and districts are forced to rely on the old assessment as the pretest score in value-added models; if the old test measures different skills than the new one, value-added estimates of teacher effectiveness on the new assessment may be biased. Second, we test the extent to which the different tests reward different teaching skills, and how teachers’ rankings might change under a new assessment regime, by estimating the stability of teacher rankings during transition years. Finally, we test the predictive validity of transition-year VAMs for teacher effectiveness in subsequent years.
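The second and third checks can likewise be illustrated with a small hypothetical example that takes already-estimated teacher effects (as in the sketch above) and asks how far rankings move when the assessment changes and how well transition-year estimates predict later effectiveness. The variable names and the quintile grouping are illustrative assumptions, not the specifications used in the papers.

```python
# Illustrative sketch only: ranking movement across regimes and the predictive
# validity of transition-year estimates, given Series of teacher effects
# indexed by teacher_id.
import pandas as pd

def rank_movement(old_fe: pd.Series, new_fe: pd.Series, n_bins: int = 5):
    """Spearman correlation of old- and new-regime teacher effects, plus the
    share of teachers who land in a different performance quintile."""
    both = pd.concat([old_fe.rename("old"), new_fe.rename("new")],
                     axis=1, join="inner")
    rho = both["old"].corr(both["new"], method="spearman")
    old_q = pd.qcut(both["old"].rank(method="first"), n_bins, labels=False)
    new_q = pd.qcut(both["new"].rank(method="first"), n_bins, labels=False)
    return rho, float((old_q != new_q).mean())

def predictive_validity(transition_fe: pd.Series, later_fe: pd.Series) -> float:
    """Correlation between transition-year estimates (old-test pretest,
    new-test outcome) and estimates from a later year in which both the
    pretest and the outcome come from the new regime."""
    both = pd.concat([transition_fe.rename("t0"), later_fe.rename("t1")],
                     axis=1, join="inner")
    return both["t0"].corr(both["t1"], method="spearman")
```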