Panel Paper: Re-Thinking the Complexity of Evaluation: Using Complex Systems Theory to Inform Evaluation Practice

Thursday, November 8, 2012 : 3:40 PM
Schaefer (Sheraton Baltimore City Center Hotel)


Margaret B. Hargreaves, Mathematica Policy Research


With increasing recognition that public policy issues inhabit a highly complex problem space, with many relevant factors to consider, rapidly changing conditions, and ongoing adaptation obscuring the effects of policies, more researchers are turning to the field of policy informatics for answers. As evidenced by the creation of a new policy informatics area at APPAM, some researchers are beginning to use complex systems simulation models, including agent-based modeling and system dynamics simulations, to support evidence-driven policy design. While these modeling methods can improve the use of predictive evidence in policy design, they are less well suited to the retrospective evaluation of implemented policies and programs.
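
For readers unfamiliar with these methods, the minimal sketch below (purely illustrative and not drawn from the paper; all names and parameter values are invented) conveys the flavor of a system dynamics simulation: a single stock projected forward under assumed inflow and outflow rates, the kind of forward-looking, predictive exercise distinguished here from retrospective evaluation.

    # Hypothetical system dynamics sketch: one stock ("families enrolled in a
    # home visiting program") updated by an assumed inflow and a proportional
    # outflow, integrated with simple Euler steps. Illustrative only.

    def simulate(initial_stock=100.0, inflow_per_month=12.0,
                 exit_fraction=0.08, months=24):
        """Project the stock forward under a constant inflow and proportional outflow."""
        stock = initial_stock
        trajectory = [stock]
        for _ in range(months):
            inflow = inflow_per_month          # assumed referrals per month
            outflow = exit_fraction * stock    # assumed program exits per month
            stock += inflow - outflow          # Euler step with dt = 1 month
            trajectory.append(stock)
        return trajectory

    if __name__ == "__main__":
        for month, value in enumerate(simulate()):
            print(f"month {month:2d}: {value:6.1f} families enrolled")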

However, systems theories can also be used effectively to improve evaluation practice in many different areas, including child development, employment and training, healthcare reform, education, environmental management, and community development. This paper presents a framework of 17 systems thinking practices that encompass key systems concepts in four areas: (1) systems attributes (contested boundaries, diverse perspectives, and interdependent relationships), (2) multi-layered dynamics (nested systems, complex ecologies, and dynamical conditions), (3) causation in complex conditions (contribution vs. attribution, and heterogeneous impacts), and (4) being systemic as a researcher or evaluator. The paper uses each systems concept to re-think particular evaluation practices, and provides examples of how these concepts have been used to improve specific evaluations.

Several evaluations will be highlighted to illustrate the use of systems thinking in practice. For example, in an evidence-based home visiting program initiative, the evaluation team used system boundary concepts to re-think the distinction between evidence-based program models and their supporting infrastructure. The team also re-conceptualized home visiting systems not as centralized service delivery structures but as networks of collaborative partners. These changes affected the project’s evaluation design, methods, findings, and resulting federal funding practices. In another example, the evaluation team is using causal loop diagrams, natural experimental designs, and other systems-based methods to better assess the contribution of technical assistance and training services to rural health system outcomes. The paper will also include additional evaluation examples from education, international development, peacebuilding, and environmental change initiatives.

The ultimate goal of this paper is to invite researchers and evaluators to add a “systems thinking” lens to their toolkit of evaluation techniques so they can better capture and address the inherent complexity of their policy work. This approach is not meant to replace current evaluation practice with an alternative methodology, but to show evaluation practitioners and researchers how to recognize and assess the situational complexity of targeted policy areas and incorporate that understanding into their work. Additional resources will be provided for those interested in learning more about this new area of evaluation practice.