Panel: Measuring Teacher Effectiveness
(Education)

Thursday, November 6, 2014: 10:15 AM-11:45 AM
Aztec (Convention Center)

*Names in bold indicate Presenter

Panel Organizer: Andrew Bibler, Michigan State University
Panel Chair: Allison Atteberry, University of Colorado - Boulder
Discussant: Andrew McEachin, North Carolina State University


Toward Improving Measures of Teacher Effectiveness: Identifying Invalid Responses in Student Surveys of Teacher Practice
Ryan Balch, Vanderbilt University and Joseph Robinson-Cimpian, University of Illinois, Urbana-Champaign



The Predictive Power and Reliability of Demonstration Lessons to Identify Effective Teachers
Alejandro J. Ganimian1, Andrew D. Ho1 and Mariana Alfonso2, (1)Harvard University, (2)Inter-American Development Bank



What Types of Teachers Improve Students' Character Skills?
Seth Gershenson1, Katie Vinopal1 and Michael S. Hayes2, (1)American University, (2)Rutgers University, Camden



Precision for Policy: Calculating Standard Errors in Value-Added Models
Andrew Bibler1, Kelly N. Vosters1, Cassandra Guarino2, Jeffrey Wooldridge1 and Mark Reckase1, (1)Michigan State University, (2)Indiana University


Over the past twenty years, a consensus has emerged that teachers are the most important school-based predictor of student achievement, that effective teachers can offset learning disadvantages associated with students’ backgrounds, and that teachers who are successful at improving students’ academic achievement also improve students’ long-term outcomes such as college enrollment and labor market wages. Yet our understanding of what constitutes effective teaching, and how we should identify it, is evolving rapidly. Two papers on this panel expand the definition of effective teaching, and the other two offer practical guidance on how to improve the reliability of two measures of effective teaching: student surveys and value-added models. All four papers respond to pressing policy questions related to the hiring and assessment of teachers.

Ganimian, Ho, and Alfonso respond to the unsuccessful track record of educational research in identifying indicators that predict effective teachers before they are hired. Using a randomized field trial in Argentina, they ask whether teachers who perform better on demonstration lessons delivered at the time of hire also perform better once they enter the classroom, as measured by student surveys, students’ grades, principal surveys, classroom observations, and a character report card. They also ask how many lessons, raters, and assessment tasks are necessary to obtain reliable scores for these demonstration lessons.

Gershenson, Hayes, and Vinopal respond to longitudinal studies in economics and education finding that effective teachers impact students’ character skills, and ask whether we can identify the observable characteristics of teachers who improve students’ character skills and related non-test-score outcomes. Using a nationally representative survey and longitudinal administrative data from North Carolina’s public schools, the authors combine school fixed effects with a rich set of covariates to identify the causal effect of observable teacher characteristics (e.g., experience, college selectivity, college major, National Board Certification, licensure status, Praxis test scores, and instructional strategies) on students’ character skills (e.g., persistence, attendance, study habits, motivation, and self-control).

Balch and Robinson-Cimpian respond to the criticism that students may not take surveys of teachers’ practices seriously and propose five techniques to identify invalid responses. Using data from Georgia and Baltimore, the authors compare the merits of identifying patterns of outlier answers based on differences from classroom averages, asking students directly about their honesty, requiring a minimum standard deviation in answer choices, testing for incongruent answer choices, and identifying students who are mischievous responders.

The last paper responds to the practical issue of how to reliably identify effective teachers using value-added models, given that the nested structure of administrative education data leaves several clustering options when computing standard errors. The authors use simulated student achievement data to study the behavior of standard errors for teacher value-added estimates under various student and teacher assignment mechanisms. Using student-level administrative data, they find that standard errors can be quite sensitive to the formula one chooses to calculate them, showing that policy decisions made using value-added estimates may depend on this choice.
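The clustering issue raised in the final paper can be made concrete with a small simulation. The sketch below is purely illustrative and is not the authors' code: it assumes Python with numpy, pandas, and statsmodels, simulates students nested in classrooms nested in teachers (all variable names and parameter values are hypothetical), and shows how heteroskedasticity-robust versus classroom-clustered standard errors can diverge for the same teacher value-added point estimate.

```python
# Illustrative sketch only (not from the panel papers): how the choice of
# clustering level changes standard errors in a value-added-style regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate a nested structure: students within classrooms within teachers.
n_teachers, classes_per_teacher, students_per_class = 50, 2, 25
teacher_effect = rng.normal(0, 0.15, n_teachers)  # hypothetical true value-added

rows = []
for t in range(n_teachers):
    for c in range(classes_per_teacher):
        class_shock = rng.normal(0, 0.10)  # shock shared by all students in a classroom
        for _ in range(students_per_class):
            prior = rng.normal(0, 1)
            score = 0.6 * prior + teacher_effect[t] + class_shock + rng.normal(0, 0.5)
            rows.append({"teacher": t, "classroom": f"{t}-{c}",
                         "prior": prior, "score": score})
df = pd.DataFrame(rows)

# Value-added-style regression: current score on prior score plus teacher indicators.
model = smf.ols("score ~ prior + C(teacher)", data=df)

# Identical point estimates; only the variance formula differs between fits.
fit_robust = model.fit(cov_type="HC1")  # heteroskedasticity-robust, no clustering
fit_cluster = model.fit(cov_type="cluster", cov_kwds={"groups": df["classroom"]})

param = "C(teacher)[T.1]"  # value-added estimate for teacher 1 (relative to teacher 0)
print("robust SE:   ", fit_robust.bse[param])
print("clustered SE:", fit_cluster.bse[param])
```

Because the simulated classroom shock is shared within clusters, the classroom-clustered standard errors are typically larger than the unclustered robust ones here, mirroring the panel's point that rankings or policy decisions based on value-added estimates can hinge on which variance formula is used.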