Panel: The Measurement Challenge for Public Managers: Employing Better Data for Better Results
(Public and Non-Profit Management and Finance)

Thursday, November 2, 2017: 10:15 AM-11:45 AM
Hong Kong (Hyatt Regency Chicago)

*Names in bold indicate Presenter

Panel Organizers:  Robert Behn, Harvard University
Panel Chairs:  John Perry, Retired City Manager
Discussants:  James R. Thompson, University of Illinois, Chicago

Biases in How Citizens Judge Government Performance: Experimental Findings from the US and Denmark
Gregg Van Ryzin1, Martin Bækgaard2, Oliver James3 and Søren Serritzlew2, (1)Rutgers University, (2)Aarhus University, (3)University of Exeter

Managing Social Services in an Era of Performance Management and Competition
Steven Rathgeb Smith, American Political Science Association

Public managers have data—lots of data.  They have input data—everything from staffing levels to dollars spent.  They have activity data—all of the micro-action work undertaken by public employees, contractors, and collaborators.  They have output data—the number of potholes filled, the number of children vaccinated, the number of veterans treated for PTSD.  And they often collect survey data from citizens, collaborators, and staff.

But outcome data—measures of what government has actually accomplished—is harder to collect, and much harder to collect on a timely basis.  Managers in public education might like to have data on whether the children they educated grew up to be productive employees and responsible citizens (two possible outcome measures of a school system’s policies and programs).  Such data, however, could not be collected for decades (and only if these former students could be tracked down).  Moreover, such data might not be useful for principals and superintendents managing the current educational programs.

In addition to the long time delays in such feedback loops, public managers have a difficult time designing randomized controlled trials (RCTs).  After all, if the treatment to be tested is a management strategy—that is, an idea, a concept—it isn’t obvious how those running the experiment can deny it to those managing the control group (or force the managers of a control group to faithfully implement a specific default strategy).  At the same time, what would be required of those managing the treatment group to ensure that they had faithfully implemented the management strategy being tested?

The four papers on this panel address the measurement challenge facing public managers in different ways.  All are authored by scholars who have been fully engaged in public-management research with a specific focus on the challenge of using data to improve performance; some have been doing this for decades.  Yet each examines a different aspect of this challenge.  Van Ryzin analyzes how citizens interpret data, showing how their prior preferences bias their conclusions about what data reveal about program effectiveness.  Jackman and Musso employ James Q. Wilson’s organizational typology of the public sector to examine how different agencies in Los Angeles implemented the city’s performance-measurement reforms.  Smith examines how the expectations for performance data are putting pressure on small, local nonprofit organizations (with little data-collection capacity).  Behn will explore and analyze various strategies for collecting better data about the effectiveness of leadership strategies and what such data might help accomplish.