
Panel Paper: Measuring Student Perceptions of Teacher Pedagogical Effectiveness: The Development of a New Student Survey Tool

Saturday, November 14, 2015 : 8:50 AM
Jasmine (Hyatt Regency Miami)


Beth Schueler1, Joseph McIntyre1, Won Suh2, Bryan Mascio3 and Hunter Gehlbach1, (1)Harvard University, (2)Panorama Education, (3)Harvard University
One fascinating finding to come out of the Measures of Effective Teaching (MET) project was that students’ perceptions of their teachers’ effectiveness, as measured by student surveys, were predictive of the growth students made in those teachers’ classes on standardized exams. Not only were student survey measures correlated with test-based value-added, but they also had higher reliability from class to class than test-based value-added measures or scores on classroom observation rubrics (Kane, 2012). Student surveys offer a number of additional potential benefits. They are relatively inexpensive to administer and can be used in non-tested grades and subjects. As of 2013, 12 states either allowed or required the use of student surveys as a component of teacher evaluations (National Council on Teacher Quality, 2013).

This paper describes the development of a new survey tool designed to measure 6th-12th grade students’ perceptions of a teacher’s pedagogical effectiveness. This scale was created as part of a larger tool, the now open-source Panorama Student Survey, designed to measure student perceptions of their schools, classes, teachers, and of themselves as learners. To develop the tool, we relied on Gehlbach and Brinkworth’s (2011) rigorous, six-step process for designing survey instruments. We illustrate how this process allowed us to improve measurement validity by combining feedback from academics, gathered through an expert review procedure, with feedback from potential respondents, gathered through focus groups and cognitive pretesting with middle and high school students. Additionally, when crafting the text of our items, we relied on often-overlooked, research-based practices. For example, we wrote items as questions rather than statements, used construct-specific response anchors, avoided agree-disagree response anchors, and avoided double-barreled questions (Artino, Gehlbach & Durning, 2011).

After developing our survey items, we conducted two studies that provide evidence that our scale functions effectively. Panorama Education administered the survey scales to students across the state of North Carolina (n=2,665), in partnership with the North Carolina Department of Public Instruction, and to students at a single large suburban high school outside of Phoenix, Arizona (n=2,995). Both surveys were conducted in the spring of 2014. We used confirmatory factor analysis to establish a single-factor structure for our scale. In addition, our paper provides evidence that the scale has strong internal consistency, captures ample variation between respondents, and demonstrates convergent and divergent validity, based on correlations with pre-existing scales designed to measure similar and distinct constructs.
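To give a rough sense of the kinds of psychometric checks described above, the sketch below computes Cronbach’s alpha, a simple eigenvalue check for a dominant single factor, and correlations with other scales on a matrix of item responses. The file name, item column names, and the two external scale columns are hypothetical placeholders, and the eigenvalue check is only an exploratory shortcut; the paper’s formal test of dimensionality used confirmatory factor analysis.

```python
import numpy as np
import pandas as pd

# Hypothetical wide-format data: one row per student, one column per survey item.
# Column names (item_1, item_2, ...) are placeholders, not the actual Panorama items.
responses = pd.read_csv("pedagogical_effectiveness_items.csv")
items = responses[[c for c in responses.columns if c.startswith("item_")]].dropna()

# Cronbach's alpha: internal consistency of the scale.
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha: {alpha:.3f}")

# Quick unidimensionality check: if a single factor dominates, the first
# eigenvalue of the inter-item correlation matrix should dwarf the rest.
eigenvalues = np.linalg.eigvalsh(items.corr().to_numpy())[::-1]
print("Eigenvalues of item correlation matrix:", np.round(eigenvalues, 2))

# Convergent / divergent validity: correlate scale scores with other scales
# designed to measure similar and distinct constructs (hypothetical columns).
scale_score = items.mean(axis=1)
for other in ["related_scale_score", "distinct_scale_score"]:
    if other in responses.columns:
        r = scale_score.corr(responses.loc[items.index, other])
        print(f"Correlation with {other}: {r:.2f}")
```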

For a subset of the responses from North Carolina, we were able to link the survey data with administrative, student-level achievement data provided by the North Carolina Education Research Data Center (NCERDC). We used these data to examine whether student perceptions of teacher effectiveness, as measured by our new tool, were correlated with teacher value-added on standardized math and English Language Arts exams. Finally, we illustrate how our tool, and student surveys more broadly, can serve as formative assessments that help teachers and their coaches identify areas of strength and areas for improvement.
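To make the linkage step concrete, the sketch below shows one way such an analysis could be set up: aggregating student responses to the teacher level and correlating the mean perception score with test-based value-added estimates. The file names, column names, and merge key are hypothetical illustrations, not the actual NCERDC data structure.

```python
import pandas as pd

# Hypothetical inputs: student-level survey responses and teacher value-added estimates.
surveys = pd.read_csv("survey_responses.csv")          # columns: teacher_id, scale_score
value_added = pd.read_csv("teacher_value_added.csv")   # columns: teacher_id, math_va, ela_va

# Aggregate student perceptions of pedagogical effectiveness to the teacher level.
teacher_perceptions = (
    surveys.groupby("teacher_id", as_index=False)["scale_score"].mean()
)

# Link survey-based perceptions with test-based value-added estimates.
linked = teacher_perceptions.merge(value_added, on="teacher_id", how="inner")

# Correlate mean perceptions with value-added in each tested subject.
for subject in ["math_va", "ela_va"]:
    r = linked["scale_score"].corr(linked[subject])
    print(f"Correlation of mean perception score with {subject}: {r:.2f}")
```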