Panel Paper: Plans and Rationale for the Human Services Research Initiative Prize: Promoting Small-Scale Rigorous Evaluations for Policy and Management

Friday, November 8, 2013 : 8:00 AM
DuPont Ballroom G (Washington Marriott)


Thomas Gais, State University of New York, Albany and Michael Wiseman, George Washington University
In this paper, we describe plans for a national competition and prize for exceptional government efforts to incorporate research in the management of human services. The prize will be awarded to local and state government agencies for well-developed plans for rigorous evaluations of alternative strategies for carrying out common public functions. The program is intended to encourage inexpensive yet serious impact analyses (such as randomized controlled trials, or RCTs) closely connected to the routine operations of public agencies.

We argue that such analyses are essential for improving the evidence base for innovative yet incremental changes in government programs and management approaches in human services, particularly now. Most state and local governments do not have the resources to implement the traditional model of large-scale evaluations—which have typically been expensive, time-consuming, politically challenging, and difficult to manage in the context of day-to-day government operations.

Yet many elected leaders and public administrators want accountability for “results.” They usually look to changes in “performance” measures to assess programs and agencies, but performance measures track levels and trends in outcomes; they do not estimate the impacts of a program or practice. Our goal is to promote ongoing accountability for impacts, though one that recognizes the limited resources of public agencies. We expect that most contestants will take advantage of the expanding range of administrative data available for many programs, and that many will build partnerships with academic institutions to formulate and implement evaluation plans.
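To make the distinction concrete, the sketch below contrasts a performance measure with an RCT-style impact estimate computed from administrative-style records. It is a minimal illustration only: the program, outcome definition, assignment procedure, sample sizes, and rates are all hypothetical assumptions, not drawn from any actual agency or data system.

```python
# Minimal sketch (hypothetical data and outcome definitions): a performance
# measure reports the level of an outcome, while an impact estimate reports the
# difference between randomly assigned groups, which randomization makes credible.
import random
import statistics

random.seed(0)

# Simulated administrative records: 1 = client re-employed within 90 days.
# In practice these outcomes would come from existing program data systems.
def simulate_outcomes(n, rate):
    return [1 if random.random() < rate else 0 for _ in range(n)]

control = simulate_outcomes(500, 0.40)    # standard intake procedure (assumed)
treatment = simulate_outcomes(500, 0.45)  # modified reminder procedure (assumed)

# Performance measure: overall re-employment rate across all clients served.
performance = statistics.mean(control + treatment)

# Impact estimate: difference in mean outcomes between randomized groups.
impact = statistics.mean(treatment) - statistics.mean(control)

# Rough standard error for a difference in two independent proportions.
def se_diff(a, b):
    pa, pb = statistics.mean(a), statistics.mean(b)
    return (pa * (1 - pa) / len(a) + pb * (1 - pb) / len(b)) ** 0.5

print(f"Performance measure (re-employment rate): {performance:.3f}")
print(f"Estimated impact of the change: {impact:.3f} (SE ~ {se_diff(treatment, control):.3f})")
```

The point of the sketch is that the same routinely collected records can support both numbers; only the second tells an agency whether the modest procedural change made a difference.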

The paper outlines this new model for rigorous evaluations in human services. It summarizes several strands of research to show the importance of and potential gains from using RCTs to assess the impact of modest changes in the tactics and strategy of government activity—and reveals how these analyses can be done quickly and at comparatively low cost. The paper also draws on examples discussed at a “Research Academy” held as part of recent joint meetings of the National Association for Welfare Research and Statistics (NAWRS) and the National Association of State TANF Administrators. And it relies on other examples of small-scale research initiatives in the public sector and the growing role of RCT evaluations in business.

The paper outlines overall plans for the competition, including criteria for selection. Tentative criteria include: 1) emphasis on changes in the routine functions of government; 2) criteria established by a working group of practitioners, academics, and evaluation professionals; 3) preference for experiments that increase agency capacity for policy analysis; 4) preference for short time horizons; and 5) evidence that the plan has a reasonable chance of being implemented. The paper summarizes the literature on prizes and their effectiveness in spurring innovation and institutional change and indicates how findings from that research apply to this initiative. The paper also discusses the intended effects of the initiative, including the rebuilding of the evidence base for human service policies, a greater role for rigorous impact evaluations in assessing management, and faster, more frequent cycles of innovation, assessment, and modification.
