Panel Paper: How Three-Arm Random Assignment within Sites Can Improve Non-Experimental Cross-Site Estimates of the Relationship between Program Characteristics and Impact

Thursday, November 8, 2018
Lincoln 3 - Exhibit Level (Marriott Wardman Park)


Laura Peck and Shawn Moulton, Abt Associates, Inc.

Three-arm experimental evaluations can be used to test whether cross-site analyses of the impact contribution of certain program components are accurate. To advance scientific methods, this paper introduces a new methodology dubbed “Cross-Site Attributional Model Improved by Calibration to Within-Site Experimental Evidence,” or CAMIC. The method leverages three-arm random assignment sites embedded within a conventional two-arm multi-site impact evaluation. CAMIC expands the opportunities afforded by having three experimental arms to identify factors that produce larger social program impacts, and it does so with only minimal statistical challenges.

The emergence of CAMIC began with the observation that randomization into three arms—a basic-treatment group, a group receiving the basic treatment plus an added program component, and a control group—makes possible both experimental and nonexperimental estimates of the impact contribution of the randomly varied program component. Aligning the nonexperimental finding to the experimental finding through adjustments to the cross-site attributional model—the essence of CAMIC—can be expected to improve the model’s ability to measure the impacts of all program components, including those not varied at random. Furthermore, the success of modeling impact variation in this way can be tested if a study uses three-arm randomization to vary more than one program component—i.e., when a third randomized arm in some sites adds program component A to the basic intervention and a third randomized arm in other sites adds program component B to the basic intervention.
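The calibration logic can be illustrated with a toy simulation. Everything below—the data-generating process, the parameter values, and the assumption that the coefficient on a component not varied at random suffers the same omitted-variable bias as the randomized one—is invented for illustration; it is a sketch of the idea, not the paper’s actual estimator.

```python
import numpy as np

# Hypothetical setup: sites adopt components A and B partly on an unobserved
# factor u that also raises impacts, so naive cross-site attribution is biased.
rng = np.random.default_rng(0)
n_sites = 200

u = rng.normal(size=n_sites)                          # omitted impact determinant
has_A = (u + rng.normal(size=n_sites) > 0).astype(float)
has_B = (u + rng.normal(size=n_sites) > 0).astype(float)
TRUE_A, TRUE_B = 2.0, 1.0                             # true impact contributions
site_impact = (1.0 + TRUE_A * has_A + TRUE_B * has_B
               + 1.5 * u + rng.normal(scale=0.5, size=n_sites))

# Naive cross-site attributional model: regress site impacts on components.
X = np.column_stack([np.ones(n_sites), has_A, has_B])
coef = np.linalg.lstsq(X, site_impact, rcond=None)[0]
naive_A, naive_B = coef[1], coef[2]                   # both biased upward by u

# Three-arm sites yield an experimental benchmark for A's contribution
# (the treatment-plus-A vs. treatment-only contrast), modeled here as an
# unbiased but noisy draw around the truth.
exp_benchmark = TRUE_A + rng.normal(scale=0.2)

# CAMIC-style calibration (sketch): use the gap between the nonexperimental
# and experimental estimates of A to adjust the model -- here, under the
# strong assumption that B's coefficient carries the same bias.
bias_A = naive_A - exp_benchmark
calibrated_A = naive_A - bias_A        # matches the benchmark by construction
calibrated_B = naive_B - bias_A        # correction carried to the unrandomized component

print(f"naive A: {naive_A:.2f}  calibrated A: {calibrated_A:.2f}  (true {TRUE_A})")
print(f"naive B: {naive_B:.2f}  calibrated B: {calibrated_B:.2f}  (true {TRUE_B})")
```

In this toy version the calibration is a single additive shift; the paper’s method instead adjusts the cross-site model’s specification until its nonexperimental estimate for the randomized component aligns with the experimental one.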

This paper will describe the logic and performance (in simulations) of the CAMIC method, using the Health Profession Opportunity Grants (HPOG) evaluation as an example of its potential to expand policy learning when evaluators and policymakers seek ways to improve—not just assess, up or down—studied social programs. Examining impact variation across sites in an attempt to learn which program components lead to larger impacts is of obvious policy importance. But as is well known, nonexperimental attribution of site-to-site impact differences to potential causal factors—both program features and contextual and participant characteristics—can give misleading guidance to program designers wishing to maximize the impact of future evidence-based interventions. If the evaluator omits from the analytic model determinants of impact that correlate with included program features, or makes mistaken functional-form assumptions, then the wrong conclusions will be reached—and the error likely will never be detected. CAMIC reduces the initial risk and provides the opportunity to test for any remaining error. Reliance on cross-site modeling of impacts for policy guidance can thus be strengthened and assessed in a single study.
