Panel Paper:
What Counts As Evidence? The Misalignment of Evaluation in the Policy Cycle
Evaluation is a multidimensional transdiscipline that centers on the systematic determination of the merit, worth, and/or significance of policies, programs, projects, products, and other evaluands, as well as dimensions or components thereof. Rather than being merely a summative effort to determine the success of a policy, evaluation provides options throughout the policy cycle. It helps decision makers determine which policies or programs can address a societal issue (needs assessment), whether a policy is being operated appropriately (process or implementation evaluation), whether its objectives are being met (outcome evaluation), and what intended and unintended effects it has (impact evaluation). To that end, the discipline of evaluation hosts a growing set of evaluation models and approaches, each of which produces evidence in varying shapes and forms. These models and approaches include expertise-oriented, consumer-oriented, goal-based and goal-free, theory-driven, decision-oriented, questions- and methods-driven, developmental and utilization-focused, and participatory approaches, to name but a few. Moreover, these models and approaches have implications for different stages in the policy life cycle. Combined, they emphasize a range of methods and allow for the incorporation of a range of research designs. In contrast, the evidence-based practice movement clearly favors certain designs and methods over others, focusing solely on the efficacy of models and thereby thwarting innovation at the beginning of the policy cycle.
In essence, the most recent policy discussions emphasize evidence-based practices, which overemphasize summative intervention efficacy at the expense of developmental, formative approaches that foster innovation, improvement, growth, and betterment. To that end, an expansive literature on evidence-based practices is emerging, and evidence-based registers that show “what works” are growing. This paper includes three cases that highlight different aspects of the evidence-based practice movement in relation to the policy cycle. First, characteristics of the landscape of evidence-based practice registers in behavioral health are highlighted (Schroeter et al., 2015). Second, challenges of using randomized controlled trials to test innovation are discussed in relation to the Department of Education’s recent First in the World program. Third, biased accountability structures in the dissemination of evaluation findings via clearinghouses, peer-reviewed publications, and gray literature are examined in relation to a recent study of program evaluation effectiveness principles in elementary science professional development.