Panel Paper: Straight from the Source: What Can Preservice Surveys Tell Us about Future Teacher Quality

Saturday, November 10, 2018
Marriott Balcony A - Mezz Level (Marriott Wardman Park)

*Names in bold indicate Presenter

Bingjie Chen1, James Cowan1, Dan Goldhaber2 and Roddy Theobald1, (1)American Institutes for Research, (2)University of Washington


The Massachusetts Department of Elementary and Secondary Education (ESE) has invested in innovative and internally aligned teacher evaluation systems that span the entire teacher pipeline—from teacher candidate certification through professional licensure. This system of performance measures includes surveys of supervising practitioners (the inservice teachers responsible for supervising the preservice internship) and teacher candidates during the preservice clinical internship, surveys of first-year teachers and their principals, and the state's summative performance assessment of inservice teachers. The breadth of these performance indicators and the close alignment of each measure with the state's Professional Standards for Teaching make this system unique among newly developed state teacher evaluation systems.

Although this system has the potential to facilitate timely, data-driven policy and licensure decisions, there is currently a limited research base on the relationship between several of these measures and outcomes for teachers and students. Moreover, even where this literature does exist, there is limited evidence on the potential consequences of using such measures to evaluate educator preparation programs. Therefore, in this paper we investigate the relationships between new survey-based, preservice measures of teacher candidate outcomes and inservice performance measures of K–12 teachers, such as their contribution to student learning (value added) and performance evaluations.

Specifically, we will consider responses on Massachusetts' teacher candidate and supervising practitioner surveys—both at the individual candidate level and aggregated to the teacher preparation program level—as predictors of teachers' value added to student achievement and performance on the state's summative teacher evaluation assessment. ESE developed these survey-based measures to support inferences about the impact of educator preparation programs on teacher candidate outcomes, so the program-level analysis will inform the extent to which these measures provide meaningful information about educator preparation program quality. Because little empirical research directly connects similar measures to outcomes for individual teachers, the candidate-level analysis will also provide evidence about whether these measures carry a meaningful signal about future teacher quality—a signal that, if it exists, could inform efforts to identify effective teachers before they enter the teaching workforce.

We expect that this research will be of broad interest to policymakers nationwide. While Massachusetts is a leader in aligning preservice and inservice performance measures, many other states have proposed or are developing new assessments for teacher candidates or methods for monitoring program effectiveness. Moreover, the U.S. Department of Education has released draft regulations that would require educator preparation programs to report data on the workforce performance of their graduates, and the Council for the Accreditation of Educator Preparation has also released recommendations for program evaluation and accreditation using inservice performance measures. Both sets of recommendations include measures similar to several of those adopted by Massachusetts: completers' influence on student achievement, performance on teacher evaluation frameworks, satisfaction with their program, and employer evaluations. This paper will therefore provide some of the first large-scale analysis of several indicators of educator preparation quality that are being considered in states across the country.