
Panel Paper: What Counts As Evidence? The Misalignment of Evaluation in the Policy Cycle

Friday, November 13, 2015 : 1:50 PM
Brickell Center (Hyatt Regency Miami)


Daniela Schroeter and Gregory D. Greenman II, Western Michigan University
The stage heuristic model of the policy cycle suggests that evaluation is solely a summative process that occurs after policy implementation. This paper argues that this placement is not only incorrect but also detrimental to the overall success of a policy. Though the stage heuristic model is only one of many frameworks for policy analysis, it is the dominant model taught in introductory policy courses. None of the eight other most common policy analysis frameworks, discussed at length by Weible and Sabatier (2014), places evaluation elsewhere in the policy process or suggests its application in policy formulation or implementation. This reliance on the Lasswellian placement of evaluation creates challenges for the evaluation of novel policies, thereby inhibiting responsive policy development and timely reaction to changing conditions.

Evaluation is a multidimensional transdiscipline that centers on the systematic determination of the merit, worth, and/or significance of policies, programs, projects, products, and other evaluands, as well as dimensions or components thereof. Rather than serving only as a summative effort to determine the success of a policy, evaluation provides options throughout the policy cycle. It helps decision makers determine what policies or programs can address a societal issue (needs assessment), whether a policy is being operated appropriately (process or implementation evaluation), whether its objectives are being met (outcome evaluation), and what intended and unintended effects it has (impact evaluation). To that end, the discipline of evaluation hosts a growing set of evaluation models and approaches, each of which produces evidence in varying shapes and forms. These models and approaches include expertise-oriented, consumer-oriented, goal-based and goal-free, theory-driven, decision-oriented, questions- and methods-driven, developmental and utilization-focused, and participatory approaches, to name but a few. Moreover, these models and approaches have implications for different stages in the policy life cycle. Combined, they emphasize a range of methods and allow for the incorporation of a range of research designs. In contrast, the evidence-based practice movement clearly favors certain designs and methods over others, focusing solely on the efficacy of models and thwarting innovation at the beginning of the policy cycle.

In essence, recent policy discussions emphasize evidence-based practices that privilege summative determinations of intervention efficacy over developmental, formative approaches that foster innovation, improvement, and growth. To that end, an expansive literature on evidence-based practices is emerging, and evidence-based registers that show “what works” are growing. This paper includes three cases that highlight different aspects of the evidence-based practice movement in relation to the policy cycle. First, it characterizes the landscape of evidence-based practice registers in behavioral health (Schroeter et al., 2015). Second, it discusses the challenges of randomized controlled trials for testing innovation in relation to the Department of Education’s recent First in the World program. Third, it examines biased accountability structures in the dissemination of evaluation findings via clearinghouses, peer-reviewed publications, and gray literature, drawing on a recent study of program evaluation effectiveness principles in elementary science professional development.