Panel Paper: Comparing Impact Variation across Experimental and Non-Experimental Designs: Insights from the Health Profession Opportunity Grants

Thursday, November 7, 2019
Plaza Building: Concourse Level, Governor's Square 10 (Sheraton Denver Downtown)


Daniel Litwok, Abt Associates, Inc.


Recent literature uses non-experimental methods to explore impact variation in an attempt to “open the black box” of an experimental impact estimate. This study compares the performance of those non-experimental methods against an experimental benchmark from an evaluation that randomly assigned a subset of treated individuals to an additional program characteristic. Studies of this nature fall into the within-study comparison literature; the comparison here uses data from the Health Profession Opportunity Grants (HPOG) Impact Study.

The HPOG Program was authorized by Congress to offer training opportunities to recipients of Temporary Assistance for Needy Families (TANF) and other low-income adults while also helping to meet the growing demand for a skilled workforce in the healthcare sector. The Administration for Children and Families (ACF) awarded the first round of HPOG grants to 32 grantees in 2010. Each of these programs offered its own blend of training and services that broadly fell within ACF’s program requirements. The ACF-funded impact study of this first round of grantees, known as the HPOG Impact Study, is an experimental evaluation of 23 of these grantees’ programs.

In addition to estimating the impact of HPOG overall, the HPOG Impact Study was designed to isolate the impacts of three specific program characteristics: access to emergency assistance, facilitated peer support, and non-cash incentives. To do so, the study offered one of these characteristics to a random subset of individuals in some of the programs that did not include it as part of their standard programming. In the 10 grantees with randomized enhancements, individuals were randomly assigned to “enhanced” HPOG, “standard” HPOG, or a control group.
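To make the design concrete, the sketch below simulates the three-arm assignment in one such grantee. This is illustrative code rather than the study’s: the arm labels, sample size, and assignment probabilities are placeholders.

```python
# Illustrative three-arm random assignment (placeholder probabilities).
import numpy as np

rng = np.random.default_rng(seed=0)
arms = ["enhanced", "standard", "control"]

# Each applicant in a grantee with a randomized enhancement lands in one arm.
assignment = rng.choice(arms, size=1_000, p=[0.35, 0.35, 0.30])

# Only the enhanced arm is offered the extra program characteristic, which
# is what later makes assignment a usable instrument for enhancement take-up.
```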

This study compares the impact of HPOG with a program enhancement to the impact of standard HPOG (both relative to no HPOG at all). This comparison implies that the impact of interest is the effect of the treatment on the treated (TOT) for the tested program enhancements. To leverage the experimental variation, I apply instrumental variables, with random assignment as the instrument, to data from the HPOG Impact Study. To leverage non-experimental variation, I estimate the same impacts but ignore the random assignment to program enhancements; instead, I use variation in the observable characteristics of those treated individuals who “take up” the enhancements to explore impact heterogeneity. Specifically, I estimate impacts using ordinary least squares, propensity score matching, and an analysis of symmetrically predicted endogenous subgroups (ASPES).
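The experimental benchmark can be sketched as two-stage least squares (2SLS), with random assignment to the enhanced arm instrumenting for enhancement take-up. This is a minimal sketch, not the study’s specification: the column names (outcome, takeup, assigned_enhanced) and covariate list are assumed placeholders.

```python
# Minimal 2SLS sketch of the experimental TOT benchmark.
import pandas as pd
import statsmodels.api as sm

def tot_2sls(df: pd.DataFrame, covariates: list[str]) -> float:
    """Return a 2SLS point estimate of the TOT effect of enhancement take-up."""
    X_exog = sm.add_constant(df[covariates])

    # First stage: predict take-up from the randomized assignment indicator.
    first = sm.OLS(df["takeup"], X_exog.join(df["assigned_enhanced"])).fit()
    df = df.assign(takeup_hat=first.fittedvalues)

    # Second stage: regress the outcome on predicted take-up.
    # (Naive second-stage standard errors are wrong; a real analysis would
    # use a dedicated IV routine or correct them explicitly.)
    second = sm.OLS(df["outcome"], X_exog.join(df["takeup_hat"])).fit()
    return second.params["takeup_hat"]
```

One of the non-experimental analogues can be sketched as 1-nearest-neighbor propensity score matching among treated individuals, comparing enhancement takers to matched non-takers. Again, the names are placeholders and the setup is assumed, not taken from the study.

```python
# Minimal propensity score matching sketch (1-NN, with replacement).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def psm_tot(df: pd.DataFrame, covariates: list[str]) -> float:
    # Propensity to take up the enhancement, among treated individuals only.
    pscore = LogisticRegression(max_iter=1000).fit(
        df[covariates], df["takeup"]
    ).predict_proba(df[covariates])[:, 1]

    takers = df[df["takeup"] == 1].copy()
    nontakers = df[df["takeup"] == 0].copy()
    takers["pscore"] = pscore[df["takeup"] == 1]
    nontakers["pscore"] = pscore[df["takeup"] == 0]

    # Match each taker to the non-taker with the closest propensity score.
    nn = NearestNeighbors(n_neighbors=1).fit(nontakers[["pscore"]])
    _, idx = nn.kneighbors(takers[["pscore"]])
    matched = nontakers.iloc[idx.ravel()]

    # Mean outcome difference between takers and their matched non-takers.
    return takers["outcome"].mean() - matched["outcome"].mean()
```

OLS would simply regress the outcome on the take-up indicator and covariates; ASPES instead predicts take-up from baseline characteristics symmetrically in both experimental arms and contrasts impacts across the predicted subgroups.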

After estimating these impacts, I compare the results using techniques described in the recent literature on the methodology of within-study comparisons. I use the results of this comparison to draw conclusions about the overall performance of the non-experimental methods and about the TOT impact of the HPOG enhancements. These conclusions contribute both to scholarly research on evaluation methods and to applied public policy evaluation.
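Once the estimates are in hand, the core of the comparison reduces to simple arithmetic. A minimal sketch, with placeholder numbers rather than study results: the bias of each non-experimental estimate relative to the experimental benchmark, in outcome units and in control-group standard-deviation units (a common effect-size metric in this literature).

```python
# Within-study comparison arithmetic; all values below are placeholders.

def compare(benchmark: float, nonexperimental: dict[str, float],
            control_sd: float) -> None:
    for name, estimate in nonexperimental.items():
        bias = estimate - benchmark
        print(f"{name}: bias = {bias:+.3f} ({bias / control_sd:+.2f} SD)")

# Placeholder values only; the study's actual estimates are not shown here.
compare(benchmark=0.10,
        nonexperimental={"OLS": 0.14, "PSM": 0.12, "ASPES": 0.09},
        control_sd=0.50)
```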