Panel Paper: Bias of Public Sector Worker Performance Monitoring: Theory and Empirical Evidence From Middle School Teachers

Saturday, November 10, 2012 : 4:30 PM
Hanover B (Radisson Plaza Lord Baltimore Hotel)


Douglas Harris and Andrew A. Anderson, University of Wisconsin at Madison

Accurately monitoring workers in some service sector jobs presents distinctive challenges because of selection bias in the assignment of clients to workers. The problem is particularly severe in public sector services where output is multidimensional and preferences for each dimension vary across clients, but where performance measures are standardized. This creates a misalignment between actual and measured productivity that varies across workers. We test the degree to which variation in measured performance is due to misalignment versus selection bias in a statewide sample of middle schools where students and teachers are assigned to explicit “tracks,” reflecting heterogeneous student ability and/or preferences. We find that failing to account for tracks leads to large biases in teacher value-added estimates. A teacher of all lower-track courses whose measured value-added is at the 50th percentile could increase her measured value-added to the 99th percentile simply by switching to all upper-track courses. We estimate that 75-95 percent of the bias is due to student sorting, with the remainder due to test misalignment. We further decompose this misalignment bias into two parts, metric misalignment and multidimensionality misalignment, which work in opposite directions. Even after accounting for explicit tracking, the standard method for estimating teacher value-added may yield biased estimates.
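The sorting mechanism the abstract describes can be illustrated with a small simulation. The sketch below is hypothetical and is not the paper's data or model: it assumes two equally effective teachers, with one drawing mostly upper-track (higher-ability) classes, and compares a naive value-added measure (raw difference in mean test-score gains) to a simple within-track adjustment.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000

# Hypothetical setup (illustrative only): students sort into tracks by
# ability, and teacher B is assigned mostly upper-track classes.
track = rng.binomial(1, 0.5, n)                # 0 = lower, 1 = upper
p_teacher_b = np.where(track == 1, 0.8, 0.2)   # teacher sorting across tracks
teacher = rng.binomial(1, p_teacher_b)         # 0 = teacher A, 1 = teacher B
ability = rng.normal(track * 1.0, 1.0)         # upper track: higher ability

# Both teachers have a true effect of zero; gains reflect ability + noise.
gain = 0.5 * ability + rng.normal(0.0, 0.5, n)

# Naive value-added: raw gap in mean gains between the two teachers.
naive = gain[teacher == 1].mean() - gain[teacher == 0].mean()

# Track-adjusted value-added: compare teachers within each track, then
# average (a minimal fixed-effects-style adjustment).
within = np.mean([
    gain[(teacher == 1) & (track == t)].mean()
    - gain[(teacher == 0) & (track == t)].mean()
    for t in (0, 1)
])

print(f"naive gap:        {naive:+.3f}")   # inflated by student sorting
print(f"within-track gap: {within:+.3f}")  # near zero: teachers are equal
```

In this simulation the naive measure rewards teacher B for the ability composition of her classes, while the within-track comparison recovers the (zero) true difference, which is the spirit of accounting for explicit tracks in value-added estimation.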