
Panel Paper: Understanding Contrasts and Contexts

Thursday, November 12, 2015 : 4:10 PM
Ibis (Hyatt Regency Miami)


Carolyn Hill and Virginia W. Knox, MDRC
As the evidence-based policy movement accelerates, policymakers, funders, researchers, and program managers face a conundrum:  how do we reconcile the rigorous estimation of program impacts with the realities of implementing an “evidence-based program” in different contexts? If only an evidence-based program could be taken off the shelf, implemented anywhere, and guaranteed to deliver impacts….

But any potential context for implementing an evidence-based program falls somewhere along a continuum: from matching the original experimental setting on all possible characteristics to differing from it on all of them. These characteristics relate to clients/participants, treatments planned, treatments delivered, other services available, the programs/providers, and the setting/context (Weiss, Bloom, & Brock, 2014), and they include both observed and unobserved factors. Implementing an evidence-based program in a specific context therefore requires confronting questions about which differences in characteristics matter, how much they matter, and for whom they matter.
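
To make these questions concrete, it can help to state them in potential-outcomes terms. The sketch below is ours and purely illustrative; the notation (Y(1), Y(0), T, X, and the subscripts E and C) is not drawn from the paper:

\[
\Delta_P \;=\; \mathrm{E}_P\big[\, Y_i(1) - Y_i(0) \,\big],
\]

where \(Y_i(1)\) and \(Y_i(0)\) are individual \(i\)'s outcomes with and without the program and \(\Delta_P\) is the average impact in population \(P\). An impact estimated in the experimental setting \(E\) is informative about a new context \(C\) only to the extent that the treatment contrast (the difference between the services received by program and control group members, \(T^1 - T^0\)) and the observed and unobserved characteristics \(X\) are comparable:

\[
\Delta_C \approx \Delta_E \quad \text{only if} \quad \big(T^1_C - T^0_C\big) \approx \big(T^1_E - T^0_E\big) \ \text{and} \ X_C \approx X_E.
\]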

These kinds of questions have been addressed through random assignment studies of particular program elements or approaches (e.g., Hamilton et al. 1997); through nonexperimental methods that seek to isolate mediators of program effects and/or to develop the assumptions needed for the validity of such estimates (e.g., Raudenbush, Reardon, & Nomi 2012; Imai, Tingley, & Keele 2009; Peck 2003); through nonexperimental modeling of the natural variation across sites in estimated program impacts (e.g., Bloom, Hill, & Riccio 2003); and through the development of frameworks and heuristics to guide model development, decisionmaking, and implementation (e.g., Weiss, Bloom, & Brock 2014; Bangser 2014; Fixsen et al. 2005; Damschroder et al. 2009; Durlak & DuPre 2008; Sandfort & Moulton 2015; Hill & Lynn 2015). In day-to-day program implementation, management information systems (MIS) and other sources offer some evidence about program operations that can support continuous quality improvement (CQI). Each of these sources and approaches can provide insights into the implementation of evidence-based programs. But none of these approaches is, by itself, necessary or sufficient for ensuring robust implementation of evidence-based programs in real-world settings.
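
As one concrete illustration of the cross-site approach, variation in impacts across the sites of a multisite experiment is often summarized with a two-level model along roughly the following lines (a sketch in our own notation, not the specification used in Bloom, Hill, & Riccio 2003):

\[
\text{Level 1 (individuals } i \text{ in sites } j\text{):} \qquad Y_{ij} = \alpha_j + B_j T_{ij} + \beta' X_{ij} + \varepsilon_{ij},
\]
\[
\text{Level 2 (sites):} \qquad B_j = \gamma_0 + \gamma_1 C_j + \gamma_2 W_j + u_j,
\]

where \(T_{ij}\) indicates random assignment to the program, \(B_j\) is the site-specific impact, \(C_j\) measures the treatment contrast actually delivered in site \(j\), \(W_j\) measures site, client, and context characteristics, and \(u_j\) is the variation in impacts that these measures leave unexplained. The size of \(\operatorname{Var}(u_j)\) is one rough indicator of how much of what matters for impacts remains unobserved.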

A necessary condition for broad implementation of evidence-based programs is the analytical capacity of program managers and technical assistance providers. Adapting an evidence-based program for particular participants, in a particular program or organization, in a particular context requires analytical skill and ongoing engagement. Developing that capacity is as important as developing the evidence base from which implementers are expected to draw, yet it is underemphasized in the current golden age of evidence-based policy.

In this paper, we review the major sources of knowledge noted above for implementing evidence-based programs. We argue that developing analytical capacity, in particular the capacity to understand a counterfactual and how it manifests in treatment contrasts, client characteristics, and contexts, is a necessary condition for widespread use of “evidence-based programs”; and we discuss the challenges and opportunities this endeavor involves.