Panel Paper: Evaluating Prospective Teachers: Testing the Predictive Validity of the edTPA

Saturday, November 5, 2016 : 10:55 AM
Columbia 1 (Washington Hilton)


Dan Goldhaber (1,2), James Cowan (2) and Roddy Theobald (2), (1) University of Washington, (2) American Institutes for Research


It is fair to say that teacher education programs (TEPs) are facing greater scrutiny than at any time in history. A number of new initiatives are designed to hold TEPs more accountable, either through direct measures of the training they provide candidates or through output measures, such as the value added of the teachers who enter the workforce. The focus on teacher education is well-founded given empirical evidence about the large impact of teacher quality on student outcomes. And despite increasing reliance on alternative routes to certification for new teachers, about 75 percent of the roughly 100,000 novice teachers who enter the public school workforce each year are trained in a traditional college or university setting.

One of the ways that TEPs and states have responded to increased accountability pressure is by adopting the edTPA. The edTPA was initially developed by researchers at Stanford University’s Center for Assessment, Learning, and Equity (SCALE) and has been further developed and distributed through a partnership between SCALE, the American Association of Colleges for Teacher Education (AACTE), and the Evaluation Systems group of Pearson. The edTPA was rolled out in two large-scale field tests in 2012 and 2013 and was “operationally launched” in 2014. It is designed to provide more in-depth (and different) information about teacher candidates than existing licensure tests. Specifically, unlike traditional licensure tests, the edTPA is a portfolio-based assessment akin to the National Board for Professional Teaching Standards (NBPTS) assessment of inservice teachers. It relies on scoring video of each candidate teaching three to five lessons from a unit of instruction to one class of students, along with assessments of the candidate’s lesson plans, student work samples and evidence of student learning, and reflective commentaries written by the candidate.

There has been remarkably rapid policy diffusion of this assessment from its initial field testing in 2012 to full implementation: the edTPA is now used by over 600 TEPs in 40 states, and passing the edTPA is a requirement for licensure in 12 states (including Washington State, the focus of this study). Yet despite this rapid adoption, critics of the edTPA point out that there is as yet no large-scale research linking it to outcomes for inservice teachers (either the probability that candidates enter the workforce or their effectiveness if they do).

We address this gap in the literature using longitudinal data from Washington State that include a complete history of teacher candidates’ scores on the edTPA. We use these data to examine the extent to which edTPA performance predicts both the likelihood of entry into the teacher workforce and value-added measures of teacher effectiveness. Importantly, Washington State set the cut score for passing the test lower than the cut-score band recommended by SCALE (and lower than the cut scores set in many other states), so we can test the predictive validity of edTPA scores for many teachers in the Washington State workforce whose scores would not have qualified them for certification in other states.
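To make the empirical approach concrete, the lines below are a minimal sketch (in Python, using pandas and statsmodels) of the two predictive-validity questions described above, not the paper's actual models. The file name and the columns edtpa_score, entered_workforce, and value_added are hypothetical, and the real analysis would likely include controls, fixed effects, and sample restrictions not shown here.

# Hedged sketch of the two predictive-validity checks; data and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

candidates = pd.read_csv("wa_candidates.csv")  # hypothetical candidate-level file

# 1) Does edTPA performance predict entry into the teacher workforce?
#    A simple logit of an entry indicator on the candidate's total edTPA score.
entry_model = smf.logit("entered_workforce ~ edtpa_score", data=candidates).fit()
print(entry_model.summary())

# 2) Among candidates who entered teaching, does edTPA performance predict
#    value-added effectiveness? A simple OLS of value added on the score.
teachers = candidates[candidates["entered_workforce"] == 1]
va_model = smf.ols("value_added ~ edtpa_score", data=teachers).fit()
print(va_model.summary())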
