
Panel Paper: Design-Based Methods for Assessing Treatment Effect Heterogeneity

Thursday, November 12, 2015 : 10:55 AM
Pearson I (Hyatt Regency Miami)


Luke Miratrix, Harvard University
This paper extends the Neymanian framework to allow explicitly both for treatment effect variation explained by covariates, known as the systematic component, and for unexplained treatment effect variation, known as the idiosyncratic component. This perspective enables estimation and testing of impact variation without imposing a model on the marginal distributions of potential outcomes, with the workhorse approach of regression with interaction terms emerging as a special case.
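To fix ideas, the decomposition can be sketched as follows. The notation here is illustrative rather than the paper's exact symbols: for unit i with covariate vector x_i, the unit-level effect splits into a part explained by covariates and a residual.

```latex
% Illustrative decomposition (notation assumed, not taken from the paper):
% \tau_i is the unit-level treatment effect, x_i the covariate vector.
\tau_i \;=\; Y_i(1) - Y_i(0)
       \;=\; \underbrace{x_i^{\top}\beta}_{\text{systematic}}
       \;+\; \underbrace{\varepsilon_i}_{\text{idiosyncratic}},
\qquad \frac{1}{n}\sum_{i=1}^{n}\varepsilon_i = 0 .
```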

Systematic variation has immediate ties to moderation analysis, a key example of heterogeneous treatment effects in applied research: moderation analysis can be thought of as exploring the systematic component of treatment effect variation.  This extension of the basic Neymanian approach to average effect estimation naturally yields minimal-assumption estimates of these systematic effects.  One consequence of this work is a proof that modeling systematic effects with interacted linear models is justified by the randomization itself.
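As a concrete illustration of the workhorse special case, the sketch below fits a linear model with treatment-by-covariate interactions; the interaction coefficients estimate how the effect varies with the covariates. The function name, variable names, and the use of statsmodels are assumptions for illustration, not the paper's code.

```python
import numpy as np
import statsmodels.api as sm

def systematic_effects(y, z, x):
    """Estimate systematic treatment effect variation with an
    interacted linear model: y ~ 1 + z + x + z*x.
    The coefficients on z*x capture how the effect varies with x."""
    y = np.asarray(y, dtype=float)
    z = np.asarray(z, dtype=float)
    x = np.asarray(x, dtype=float)
    if x.ndim == 1:
        x = x[:, None]
    # Center covariates so the coefficient on z estimates the
    # average treatment effect.
    x_c = x - x.mean(axis=0)
    design = sm.add_constant(np.column_stack([z, x_c, z[:, None] * x_c]))
    # HC2 robust standard errors are a common design-based default.
    return sm.OLS(y, design).fit(cov_type="HC2")
```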

Classic analyses, however, do not readily extend to idiosyncratic variation, i.e., the variation left over once any systematic component has been modeled.  Assessing idiosyncratic variation in randomized experiments is critical for truly understanding treatment effect variation and, importantly, is conspicuously absent from existing methods.  We will present a method for testing for the presence of meaningful idiosyncratic treatment effect variation using a Fisher Randomization Test, an alternative analytic approach under the randomization inference umbrella.
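A minimal sketch of such a test appears below. It tests the null of a constant treatment effect by imputing the missing potential outcomes with a plug-in estimate of the average effect and then permuting the assignment vector. The variance-difference statistic and the plug-in shift are simplifications chosen for illustration; the paper's actual procedure handles the nuisance parameter more carefully.

```python
import numpy as np

rng = np.random.default_rng(0)

def frt_constant_effect(y, z, n_perm=2000):
    """Fisher Randomization Test of the null that the treatment
    effect is constant across units (no idiosyncratic variation).

    Plugs in the estimated average effect to impute control
    potential outcomes under the null, then compares the observed
    statistic to its randomization distribution.  The plug-in
    estimate makes this approximate."""
    y, z = np.asarray(y, float), np.asarray(z, int)
    tau_hat = y[z == 1].mean() - y[z == 0].mean()
    y0 = y - tau_hat * z          # imputed control outcomes under the null

    def stat(assign):
        # Difference in spread between groups: sensitive to effect
        # variation; other statistics (e.g., Kolmogorov-Smirnov) also work.
        return abs(np.var(y0[assign == 1], ddof=1)
                   - np.var(y0[assign == 0], ddof=1))

    observed = stat(z)
    perms = [stat(rng.permutation(z)) for _ in range(n_perm)]
    return np.mean([p >= observed for p in perms])   # p-value
```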

This framework leads to two other practical results. First, estimates of systematic impact variation give sharp bounds on overall treatment variation as well as bounds on the proportion of total impact variation explained by a given model—this is essentially an R² for treatment effect variation. Second, by using covariates to partially account for the correlation of potential outcomes, we sharpen the bounds on the variance of the unadjusted average treatment effect estimate itself. As long as the treatment effect varies across observed covariates, these bounds are sharper than the current sharp bounds in the literature.
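For reference, the classical quantities behind these results can be written as follows, in standard Neyman notation assumed here for illustration. The variance of the difference-in-means estimator involves the unidentified variance of the unit-level effects, and it is exactly this term that estimates of systematic variation help bound.

```latex
% Standard Neyman variance decomposition (illustrative notation):
\mathrm{Var}(\hat{\tau})
  \;=\; \frac{S_1^2}{n_1} + \frac{S_0^2}{n_0} - \frac{S_\tau^2}{n},
\qquad
S_\tau^2 \;=\; \frac{1}{n-1}\sum_{i=1}^{n}\bigl(\tau_i - \bar{\tau}\bigr)^2 .
% S_\tau^2 is not identified from the data; bounding it from below by
% the estimated systematic variation tightens the usual conservative
% variance bound.
```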

These ideas will be demonstrated with the Head Start Impact Study, a large randomized evaluation in educational research, showing that this approach is meaningful in practice.