Panel Paper: Teaching to the (Common Core) Test: Causal Effects of a CCSS-Aligned Assessment on Teachers’ Practice

Saturday, November 10, 2018
Marriott Balcony B - Mezz Level (Marriott Wardman Park)


Jessalynn James, University of Virginia


The past decade has brought significant changes to state and district education policy, chief among them the implementation of rigorous teacher evaluation systems and the near-nationwide adoption of the Common Core State Standards (CCSS). The CCSS, with their focus on depth of knowledge and higher-order skills, represent an extraordinary shift in expectations for students' learning and, theoretically, in the teaching practices necessary to meet those expectations. This transition occurred alongside the introduction of more rigorous and complex teacher evaluation systems, which provide more nuanced data about teachers' performance and practice.

In the 2014-15 academic year (AY), the District of Columbia Public Schools (DCPS), which had previously adopted both a rigorous teacher evaluation system and the CCSS, replaced its former test, the DC Comprehensive Assessment System (CAS), with a CCSS-aligned assessment of student achievement developed by the Partnership for Assessment of Readiness for College and Careers (PARCC). The CCSS and the PARCC exam were thought to represent a tremendous shift in expectations for student learning (Student Achievement Partners, 2013, 2014) and, in turn, in instruction, as assessments are themselves important drivers of teachers' practice decisions (Cunningham, 2014; Jennings & Lauen, 2016). Indeed, in a survey of educators across five states that adopted the new assessments, large majorities of teachers reported changing their instruction, at least in part, in response to the new assessments (Kane, Owens, Marinell, Thal, & Staiger, 2016).

Building on other work (Dee, James, Phipps, & Wyckoff, 2017) showing that certain teaching practices are differentially important for student achievement on the PARCC exam, I explore whether teachers changed their practice in response to the new exam. To answer this question, I use a rich set of DCPS administrative data from AY2009-10 through AY2015-16, which includes teachers' performance on a classroom observation measure, other quality measures, and teacher characteristics, alongside rich school- and student-level data. I use a comparative interrupted time series design to estimate the effect of the transition to the new assessment on the practice of teachers in PARCC-tested subjects and grades, relative to other general education teachers in DCPS who did not experience a comparable change.
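
To make the design concrete, below is a minimal sketch of one standard comparative interrupted time series specification, written in LaTeX; the variable names and functional form are illustrative assumptions for exposition, not the paper's actual model:

% Illustrative CITS specification; all symbols are assumed, not drawn from the paper.
\begin{equation*}
\begin{aligned}
Y_{jt} ={} & \beta_0 + \beta_1 t + \beta_2 \mathrm{Post}_t + \beta_3 \mathrm{Tested}_j \\
           & + \beta_4 \left( \mathrm{Tested}_j \times t \right) + \beta_5 \left( \mathrm{Tested}_j \times \mathrm{Post}_t \right) + X_{jt}' \gamma + \varepsilon_{jt}
\end{aligned}
\end{equation*}

Here $Y_{jt}$ is the observed teaching quality of teacher $j$ in year $t$, $t$ is a linear time trend, $\mathrm{Tested}_j$ indicates assignment to a PARCC-tested grade and subject, $\mathrm{Post}_t$ indicates AY2014-15 and later, and $X_{jt}$ is a vector of teacher-, school-, and student-level controls. Under this assumed specification, $\beta_5$ is the estimate of interest: the shift in tested teachers' observed practice at the transition, net of the contemporaneous shift among comparison teachers and of any differential pre-existing trend.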

Preliminary results suggest a significant drop in overall observed teaching quality for teachers in tested grades and subjects across the transition. I use descriptive analyses to investigate which factors explain this drop in performance—whether it is driven by particular teaching domains, heterogeneity across teachers (e.g., novice versus experienced), or perhaps changes in how evaluators operationalized the evaluation rubric in the context of a new assessment.

Much has been made of the implications of the CCSS, and by extension of CCSS-aligned exams such as PARCC, for student learning, yet we lack compelling empirical evidence on whether the push for higher, more rigorous standards has in fact changed how students are taught. Differential effects on teaching practice across the two assessment regimes could shed light on the malleability of teaching under the CCSS, or could reveal unintended consequences for teachers' practice.