Panel Paper: Insights from the Health Profession Opportunity Grant Program’s Three-Armed, Multi-Site Experiment for Program and Policy Learning and Evaluation Practice

Friday, November 9, 2018
Marriott Balcony B - Mezz Level (Marriott Wardman Park)


Laura Peck1, Nicole Constance2 and Hilary Forster2, (1)Abt Associates, Inc., (2)U.S. Department of Health and Human Services


Multi-site evaluations provide opportunities to exploit both natural and planned variation to learn which features of a program are associated with its impacts. Prior analyses have examined this cross-site variation qualitatively (e.g., Riccio, Friedlander, & Freedman, 1994) and non-experimentally (e.g., Bloom, Hill, & Riccio, 2003; Greenberg, Meyer, & Wiseman, 1993). A recent major national evaluation, however, did so with an innovative evaluation design that added a second treatment arm in several locations to isolate the contribution of key components to the multi-faceted program's impact.

Funded in 2010, the first round of the Health Profession Opportunity Grants (HPOG 1.0) Program provided occupational training in the healthcare sector to TANF recipients and other low-income individuals, both to improve these individuals' employment prospects and to meet local demand for healthcare workers. HPOG operated within a career pathways framework: rich support services helped participants complete trainings articulated along a career trajectory, and employer connections helped make the trainings relevant and practical and facilitated employment after completion. The HPOG 1.0 Impact Study includes 23 of the 32 first-round grantees, which operated 42 distinct local HPOG programs. The study enrolled 13,717 participants in treatment and control groups whose outcomes were compared to estimate the impact of the collection of programs.
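
For reference, the pooled impact in this standard two-arm comparison can be sketched as follows (the notation is ours, not the study's, and the actual estimates adjust for baseline covariates):

$$\hat{\Delta} = \bar{Y}_{T} - \bar{Y}_{C},$$

where $\bar{Y}_{T}$ and $\bar{Y}_{C}$ denote mean outcomes for the treatment and control groups across the participating programs.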

The evaluation offered programs additional funding to implement one of three selected program components, on the condition that they ration access to that component so that its contribution to overall program impacts could be estimated experimentally. In total, 19 of the 42 programs ran one of these three-armed experiments, in which the selected component was added to the standard HPOG program for one treatment group, as sketched below. From a design perspective, what is distinctive about the HPOG 1.0 Impact Study is that it randomized access to selected program components layered on top of a standard program.
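
As a rough sketch of how the second treatment arm identifies a component's contribution (again using our own notation, not the study's), the three-armed sites support three experimental contrasts:

$$
\hat{\Delta}_{\text{standard}} = \bar{Y}_{T_1} - \bar{Y}_{C}, \qquad
\hat{\Delta}_{\text{enhanced}} = \bar{Y}_{T_2} - \bar{Y}_{C}, \qquad
\hat{\Delta}_{\text{component}} = \bar{Y}_{T_2} - \bar{Y}_{T_1},
$$

where $C$ is the control group, $T_1$ is the standard HPOG program, and $T_2$ is the standard program plus the selected component. The third contrast isolates the component's incremental contribution to the program's impact.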

This paper will consider how HPOG implemented this kind of experimental test, one in which a subset of many sites, all running multi-faceted programs, takes on a second treatment arm. It will explore two questions: (1) what criteria should apply when choosing program components to evaluate; and (2) what considerations arise when implementing those criteria, including choosing the right components and the right study sites. The presentation will include discussion of the evaluation's findings and their implications for future evaluation research, practice, and policy.
