Panel Paper: A Three-Armed, Multi-Site Evaluation Design’s Potential for Within Study Comparison and Policy Learning

Thursday, November 2, 2017
McCormick (Hyatt Regency Chicago)


Laura Peck, Eleanor L. Harvill and Douglas Walton, Abt Associates, Inc.


The impact evaluation of the first round of the Health Profession Opportunity Grants (HPOG) program randomized individuals across 42 HPOG programs to an HPOG treatment group or to a control group without access to HPOG-funded services. In 19 of the HPOG programs there were two treatment arms, which enabled the evaluation to estimate a given program component's relative impact. In particular, one treatment group had access to the standard HPOG program, while the second had access to HPOG enhanced with one of three additional program components: facilitated peer support groups, emergency assistance for specific needs, or noncash incentives. Because these three-armed sites are embedded in a larger evaluation, they permit a unique kind of within-study comparison: the evaluation has both experimental and nonexperimental impact estimates for the program components that were offered through rationed access in those 19 programs.
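To make the design concrete, the following minimal sketch (not the study's actual estimator) simulates one hypothetical three-armed program: individuals are randomized to a control group, a standard-HPOG arm, or an enhanced-HPOG arm, and simple mean differences yield experimental estimates of both the overall program impact and the enhancement's relative impact. All sample sizes and effect sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical effect sizes in standard-deviation units; invented for illustration.
PROGRAM_EFFECT = 0.30      # standard HPOG arm vs. control
ENHANCEMENT_EFFECT = 0.10  # enhanced HPOG arm vs. standard HPOG arm

n = 900  # individuals randomized in one hypothetical three-armed program

# Randomize each individual to one of the three arms with equal probability.
arm = rng.choice(["control", "standard", "enhanced"], size=n)

# Simulate a standardized outcome: noise plus arm-specific effects.
y = rng.normal(0.0, 1.0, size=n)
y[arm == "standard"] += PROGRAM_EFFECT
y[arm == "enhanced"] += PROGRAM_EFFECT + ENHANCEMENT_EFFECT

# Experimental impact estimates from simple mean differences.
overall_impact = y[arm != "control"].mean() - y[arm == "control"].mean()
relative_impact = y[arm == "enhanced"].mean() - y[arm == "standard"].mean()

print(f"overall program impact vs. control: {overall_impact:.3f}")
print(f"relative impact of the enhancement: {relative_impact:.3f}")
```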

The study team developed a new method that leverages these experimental results to learn more about their nonexperimental counterpart analyses. The method, Cross-Site Attributional Model Improved by Calibration to Within-Site Individual Randomization Findings (CAMIC), proposes a strategy for reducing bias in nonexperimental analyses of how a program's structure and implementation influence its overall impacts. This presentation will elaborate on the logic and simulated performance of the CAMIC method, and it will also discuss more broadly the lessons from HPOG's implementation of the "enhancement design" and the potential benefits of using this design in future practice.
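The abstract does not spell out CAMIC's estimating equations, but the calibration idea can be sketched as follows: among several candidate cross-site attributional models of site-level impacts, prefer the specification whose estimated coefficient on a randomized program component lies closest to the experimental benchmark from the three-armed sites. The sketch below illustrates only that selection logic; the function name (camic_style_selection), the candidate covariate sets, and all data are hypothetical, and this is an assumption-laden simplification rather than the study team's implementation.

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares; returns the coefficient vector."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def camic_style_selection(site_impacts, component, covariate_sets, benchmark):
    """Hypothetical illustration of the calibration logic: pick the
    cross-site specification whose component coefficient is closest
    to the experimental benchmark.

    site_impacts   : (n_sites,) experimental impact estimate per site
    component      : (n_sites,) 0/1 indicator that a site offered the component
    covariate_sets : list of (n_sites, k) arrays of site-level covariates
    benchmark      : experimental estimate of the component's relative
                     impact from the three-armed sites
    """
    best = None
    for covs in covariate_sets:
        X = np.column_stack([np.ones_like(component), component, covs])
        beta = fit_ols(X, site_impacts)
        gap = abs(beta[1] - benchmark)  # beta[1] is the component coefficient
        if best is None or gap < best[0]:
            best = (gap, beta)
    return best  # (calibration gap, coefficients of the preferred model)

# Hypothetical usage with invented site-level data:
rng = np.random.default_rng(0)
n_sites = 42
component = rng.integers(0, 2, n_sites).astype(float)
covariate = rng.normal(size=(n_sites, 1))  # e.g., a local labor-market measure
site_impacts = (0.30 + 0.10 * component
                + 0.05 * covariate[:, 0]
                + rng.normal(0.0, 0.05, n_sites))
gap, beta = camic_style_selection(
    site_impacts, component,
    covariate_sets=[np.zeros((n_sites, 0)), covariate],
    benchmark=0.10,  # experimental relative impact from the three-armed sites
)
print(f"calibration gap: {gap:.3f}; component coefficient: {beta[1]:.3f}")
```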