We argue that such analyses are essential for improving the evidence base for innovative yet incremental changes in government programs and management approaches in human services, particularly now. Most state and local governments do not have the resources to implement the traditional model of large-scale evaluations—which have typically been expensive, time-consuming, politically challenging, and difficult to manage in the context of day-to-day government operations.
Yet many elected leaders and public administrators want accountability for “results.” They usually look to changes in “performance” measures to assess programs and agencies, but performance measures do not estimate impacts. Our goal is to promote ongoing accountability for impacts, though accountability of a kind that recognizes the limited resources of public agencies. We expect that most contestants will take advantage of the expanding range of administrative data available for many programs, and that many will build partnerships with academic institutions to formulate and implement evaluation plans.
The paper outlines this new model for rigorous evaluations in human services. It summarizes several strands of research to show the importance of, and the potential gains from, using randomized controlled trials (RCTs) to assess the impact of modest changes in the tactics and strategy of government activity, and it shows how such analyses can be done quickly and at comparatively low cost. The paper also draws on examples discussed at a “Research Academy” held as part of recent joint meetings of the National Association for Welfare Research and Statistics (NAWRS) and the National Association of State TANF Administrators, as well as on other examples of small-scale research initiatives in the public sector and the growing role of RCT evaluations in business.
The paper outlines overall plans for the competition, including criteria for selection. Tentative criteria include: 1) an emphasis on changes in the routine functions of government; 2) criteria to be established by a working group of practitioners, academics, and evaluation professionals; 3) a preference for experiments that increase agency capacity for policy analysis; 4) a preference for short time horizons; and 5) evidence that the plan has a reasonable chance of being implemented. The paper summarizes the literature on prizes and their effectiveness in spurring innovation and institutional change, and it indicates how findings from that research apply to this initiative. It also discusses the intended effects of the initiative, including rebuilding the evidence base for human service policies, a greater role for rigorous impact evaluations in assessing management, and faster, more frequent cycles of innovation, assessment, and modification.