Roundtable: Beyond Value Added: The Future of Measuring and Using Data on Educator Impact
(Education)

Thursday, November 3, 2016: 1:15 PM-2:45 PM
Columbia 6 (Washington Hilton)

*Names in bold indicate Presenter

Roundtable Organizer:  Steven Glazerman, Mathematica Policy Research
Moderator:  Steven Glazerman, Mathematica Policy Research
Speakers:  Douglas N. Harris, University of Wisconsin–Madison; Ryan Balch, My Student Survey; Elizabeth Warner, U.S. Department of Education; and Alden Wells, District of Columbia Public Schools

This roundtable will explore how analysts are making better use of different types of data on teacher effectiveness to improve education. Test-score-based growth, or value-added, measures (VAMs) have increasingly been met with suspicion, embodied, for example, in official warnings from the American Statistical Association (ASA), the American Educational Research Association (AERA), and the National Association of Secondary School Principals (NASSP), and in criticism from researchers and educators themselves. Many of the concerns raised about VAM validity and reliability are genuine, but they apply just as much to other informational inputs to teacher evaluation, such as classroom observations, student surveys, student learning objectives, and subjective principal ratings. The buzzword for dealing with all of these concerns is “multiple measures.” Practitioners are realizing, however, that the hard work lies in how to balance multiple measures, how to weight them appropriately, how to interpret them, and how to link them to high- and low-stakes decisions. This roundtable will give the audience a chance to hear from researchers and practitioners who have been grappling with how to generate and use multiple measures in creative yet statistically appropriate ways. The panelists include practitioners at both the local (district) and federal (U.S. Department of Education) levels, as well as researchers who work closely with school districts as providers of teacher evaluation services or as members of a district education research alliance. The analytic tools the researchers bring to bear include Bayesian models, sequential decision-making, and loss functions.
The problems that teacher evaluation data are meant to address include the equitable distribution of teacher effectiveness, accountability for teacher preparation programs, teacher recruitment and retention policies, and targeted professional development. The end products of multiple-measures teacher evaluation systems are not simply rectangular databases of teacher IDs, scores, and confidence intervals. Instead, the goal is to use these multiple indicators, along with information on their uncertainty, to populate insightful interactive data visualizations, dashboards, and customized reports that lead directly to action plans. We will hear about the panelists’ experiences, both frustrations and success stories, with measuring and using multiple measures of educator impact in the post-VAM era, and will broaden the discussion to incorporate the diverse perspectives of the APPAM attendees at the roundtable.

