Panel Paper: Implications of Rigorous Scientific Evidence Standards for the Design of Impact Evaluations

Saturday, November 8, 2014 : 10:55 AM
Apache (Convention Center)


Beth Boulay (1), Cristofer Price (1), Barbara Goodson (2), Robert Olsen (3), Eleanor L. Harvill (4), Katherine N. Gan (4) and Michael Frye (4); (1) Abt Associates, (2) Dillon-Goodson Research Associates, (3) Rob Olsen LLC, (4) Abt Associates, Inc.
The field of program evaluation has long been concerned with how to interpret, assess, and communicate the quality of evidence on the effectiveness of social programs. As a research community, we spend substantial time thinking through how to produce estimates of effectiveness that are internally valid. Ongoing efforts to systematically assess and report the quality of evidence across studies are led by organizations such as the Campbell Collaboration and by government entities such as the What Works Clearinghouse at the U.S. Department of Education (ED). These efforts focus on the end point of a study: how the findings were generated and the size of the effects. However, “you can’t fix by analysis what you bungled by design” (Light, Singer, and Willett, 1990). Many of the key decisions that determine a study’s potential to produce high-quality evidence are made early, in the design phase. Recent federal initiatives have tested the feasibility of applying standards for strength of evidence to research designs, well before final results are produced.

This paper is a practical introduction to applying key criteria from these rigorous scientific evidence standards at the design phase. We identify the critical up-front design decisions that lead to demonstrably higher-quality evidence. The paper draws on the experience of ED’s National Evaluation of Investing in Innovation (NEi3), which provides technical assistance to support local evaluators as they design and conduct rigorous research and assesses the potential of evaluators’ planned designs to meet evidence standards. Key contributions include a set of recommendations to ensure that a study has the potential to meet quality standards related to the formation of treatment and comparison groups, attrition, baseline equivalence, and outcome measurement. These recommendations include how to avoid common pitfalls associated with frequently employed research designs.
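To make one of these criteria concrete, the sketch below illustrates a baseline equivalence check in the spirit of What Works Clearinghouse standards, under which a standardized baseline difference of 0.05 standard deviations or less satisfies the standard, a difference between 0.05 and 0.25 requires statistical adjustment, and a difference above 0.25 fails. This is a minimal illustration only, not the NEi3 procedure; the function name and return values are our own.

    from statistics import mean, stdev

    def baseline_equivalence(treatment, comparison):
        """Classify a baseline difference against WWC-style thresholds.

        Returns the absolute standardized mean difference at baseline and
        whether the design satisfies baseline equivalence outright, requires
        statistical adjustment, or fails the standard.
        """
        n_t, n_c = len(treatment), len(comparison)
        # Pool the two groups' sample standard deviations
        pooled_sd = (((n_t - 1) * stdev(treatment) ** 2 +
                      (n_c - 1) * stdev(comparison) ** 2) /
                     (n_t + n_c - 2)) ** 0.5
        g = abs(mean(treatment) - mean(comparison)) / pooled_sd
        if g <= 0.05:
            return g, "satisfied"
        if g <= 0.25:
            return g, "satisfied with statistical adjustment"
        return g, "not satisfied"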

This paper would fit well with papers that address using rigorous evidence to drive decision making, systematic review and meta-analysis, tiered evidence programs, scientific evidence standards, or assessing the quality of evidence.