
Panel Paper: New Analytic Opportunities to Learn about Program Variants from Multi-Site Experimental Evaluations

Saturday, November 14, 2015: 8:30 AM
Pearson I (Hyatt Regency Miami)


Laura Peck (1), Sarah D. Sahni (2), Shawn R. Moulton (1, 2), Edward Bein (2) and Stephen Bell (1); (1) Abt Associates, (2) Abt Associates, Inc.
The Health Profession Opportunity Grants (HPOG) program impact evaluation randomizes eligible applicants to treatment and control groups for a health-sector, career-pathways-based training program. In some sites, individuals are randomized across two variants of the program (or to a control group); in other sites, they are randomized simply to gain access to the HPOG program or not. The second treatment arm that exists in some sites permits focus on selected program components that are added to the base program explicitly for evaluation learning.

This design allows comparison of different “program worlds”: one in which the program adopts the component of interest (with either lottery or unrestricted access), and an alternative world in which the program does not adopt that component. The contrast in outcomes between the two treatment groups reveals the contribution of the component as an add-on to the main program. That is, it shows the difference in impact that adding access to the component makes, given the already-existing program configuration. This is the best information for deciding whether to include the selected component in the standard program model going forward.

The HPOG-Impact Design Report and Analysis Plan documents (Harvill, Moulton & Peck, 2015; Peck et al., 2014) detail the project’s plans for capitalizing on this unique design to learn which program components contribute to program impacts. The proposed paper will pre-test those methods using data from the National Evaluation of Welfare-to-Work Strategies (NEWWS), simulated data, or both. It will demonstrate use of the unbiased, purely experimental impact estimate from the three-arm programs as a benchmark for reducing the selection bias that can arise when the effectiveness of those program components is evaluated non-experimentally, within programs that were configured to offer them before the experiment went into effect.
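As a rough illustration of the logic the paper pre-tests, the sketch below simulates a three-arm site (control, base program, base program plus an add-on component) and a non-experimental analogue in which participants self-select into the component. It is a minimal sketch under stated assumptions: the variable names, effect sizes, and selection mechanism are illustrative inventions, not HPOG or NEWWS quantities, and the benchmarking approach shown is a simplified stand-in for the methods described in the analysis plan.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Illustrative effect sizes (assumptions, not HPOG estimates)
base_effect = 0.30   # impact of the base program vs. control
addon_effect = 0.15  # incremental impact of the add-on component

# --- Three-arm experimental site: control (C), base program (T1),
# --- base program plus the add-on component (T2).
arm = rng.choice(["C", "T1", "T2"], size=n)
ability = rng.normal(size=n)                        # unobserved confounder
y_exp = (ability
         + base_effect * (arm != "C")
         + addon_effect * (arm == "T2")
         + rng.normal(size=n))

# Experimental benchmark: because T1 and T2 are randomized from the same
# applicant pool, their mean-outcome difference isolates the add-on's
# contribution to the program's impact.
benchmark = y_exp[arm == "T2"].mean() - y_exp[arm == "T1"].mean()

# --- Non-experimental analogue: a program that already offered the
# --- component, where higher-ability participants self-select into it.
ability2 = rng.normal(size=n)
takes_addon = (ability2 + rng.normal(size=n)) > 0.5   # selection on ability
y_obs = (ability2
         + base_effect
         + addon_effect * takes_addon
         + rng.normal(size=n))
naive = y_obs[takes_addon].mean() - y_obs[~takes_addon].mean()

print(f"true add-on effect:        {addon_effect:.2f}")
print(f"experimental T2 - T1:      {benchmark:.2f}")
print(f"non-experimental contrast: {naive:.2f}")
print(f"implied selection bias:    {naive - benchmark:.2f}")
```

Run on simulated data like this, the gap between the naive non-experimental contrast and the experimental T2 − T1 benchmark gives a direct read on the selection bias that the paper's proposed methods aim to detect and reduce.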