Panel: Testing-Instrument Issues and Value Added Models

Saturday, November 10, 2012: 3:30 PM-5:00 PM
Hanover B (Radisson Plaza Lord Baltimore Hotel)

*Names in bold indicate Presenter

Organizer:  Cory Koedel, University of Missouri-Columbia
Moderators:  Tim Sass, Florida State University and Brian Gill, Mathematica Policy Research
Chair:  Seth Gershenson, American University

Value-added models (VAMs) are increasingly used in research and policy applications to evaluate schools and teachers. VAMs are based on student scores on standardized tests, and the properties of the testing instruments can influence model performance. This is an understudied area in the value-added literature. The papers in this panel focus on two key issues related to the properties of standardized testing instruments and their implications for value-added modeling.

The first issue is test measurement error. While it is widely acknowledged that standardized tests are noisy measures of student learning, most VAMs do not directly account for imprecision in the tests. Furthermore, in instances where VAMs are adjusted to account for test measurement error, the adjustments are made using information from a limited research base. This panel covers methods by which currently available test-measurement-error metrics can be used to immediately improve VAM performance, as well as methods to better measure and model test measurement error moving forward.

The second issue is the interaction between testing-instrument properties and student tracking in higher grades. Researchers and policymakers are still trying to determine how VAMs can and should be used beyond the elementary level, and an important practical consideration in later grades is that students are more strictly tracked by ability. The influence of testing-instrument properties on student scores (e.g., test-score ceilings), and the alignment between exam content and coursework, differ across tracks of students. This has important implications for the application of VAMs in later grades.

Test Measurement Error and Inference From Value-Added Models
Cory Koedel, Rebecca Leatherman and Eric Parsons, University of Missouri-Columbia

Measuring Test Measurement Error: A General Approach
Donald J. Boyd and Hamilton Lankford, University at Albany, SUNY; Susanna Loeb, Stanford University; and James Wyckoff, University of Virginia

Bias of Public Sector Worker Performance Monitoring: Theory and Empirical Evidence From Middle School Teachers
Douglas Harris and Andrew A. Anderson, University of Wisconsin-Madison
